Khan Engineering

We're the engineers behind Khan Academy. We're building a free, world-class education for anyone, anywhere.


The User Write Lock: an Alternative to Transactions for Google App Engine

by Craig Silverstein on Jun 20, 2016

Transactions are the standard method in Google App Engine's datastore -- as in many databases -- to ensure atomicity. This is true for both atomic writes of multiple entities, and atomic writes of a single entity:

@ndb.transactional
def safe_watched_video(user_id, video):
    user = GetUser(user_id)
    user.num_videos_watched += 1
    video_log = GetUserVideoLog(user)
    video_log.add_watched_video(video)
    ndb.put_multi([user, video_log])

@ndb.transactional
def safe_login_counter(user_id):
    user = GetUser(user_id)
    user.num_logins += 1
    user.put()

Transactions work by checking all the entities that you plan on put()-ing at both the beginning and end of the function. If someone else had changed an entity in the datastore while the function was running, the transaction rolls back the function and retries.
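
That check-and-retry behavior can be sketched with a toy in-memory "datastore" that stamps each entity with a version number. All names here are hypothetical; real App Engine transactions track entity-group state rather than a version field:

```python
# A toy sketch of check-and-retry, using a dict with per-entity version
# numbers as a stand-in for the datastore. Hypothetical names; real App
# Engine transactions track entity-group state rather than a version field.
store = {"user:1": {"num_logins": 0, "_version": 0}}

def run_in_transaction(key, mutate, max_retries=5):
    """Apply mutate() to a copy; commit only if nobody else wrote."""
    for _ in range(max_retries):
        snapshot = dict(store[key])            # read at transaction start
        start_version = snapshot["_version"]
        mutate(snapshot)                       # the function body
        if store[key]["_version"] == start_version:   # re-check at the end
            snapshot["_version"] = start_version + 1
            store[key] = snapshot              # commit
            return
        # a concurrent commit happened: roll back the copy and retry
    raise RuntimeError("too much contention")

def bump_logins(entity):
    entity["num_logins"] += 1
```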

When people think of transactions, they often think of atomically updating multiple entities; a transaction ensures that either both the video-log and the user are updated, or neither is. But the "someone else can modify" problem exists just as much for a single entity, which is why a transaction is needed even for safe_login_counter. Without it, if two processes called safe_login_counter() at the same time with the same user_id, the db operations might end up interleaved in such a way that causes num_logins to end up incremented by 1, not by 2.
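
The lost-update interleaving is easy to see in miniature. This is a toy illustration in plain Python, not App Engine code:

```python
# Toy illustration of the lost update described above: two "processes"
# interleave a read-modify-write on the same counter.
db = {"num_logins": 0}

a_copy = db["num_logins"]   # process A reads 0
b_copy = db["num_logins"]   # process B reads 0, before A writes

a_copy += 1                 # both increment their private copies
b_copy += 1

db["num_logins"] = a_copy   # A writes 1
db["num_logins"] = b_copy   # B also writes 1, clobbering A's update

# num_logins is now 1, not 2: one login was lost.
```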

The trouble with transactions

While transactions are effective, they can be hard to use, particularly within Google App Engine. One problem is that transactions "freeze" the state of the datastore at the time the transaction starts, yielding surprising results. Suppose you have a function like this:

@ndb.transactional
def login_twice(user_id):
    login_info = GetLoginInfo(user_id)
    login_info.last_login = datetime.datetime.now()
    login_info.put()
    safe_login_counter(user_id)
    # Don't ask why, but we need to call this twice
    safe_login_counter(user_id)

You might expect this to update num_logins by 2, but in fact it only updates it by 1: when safe_login_counter calls get(), those calls see the "frozen" datastore as it existed at the beginning of login_twice, ignoring the put()s we did in the meantime.

While this example is somewhat contrived, such a scenario does come up (typically, as here, with nested transactions), and can be hard to debug. It comes up particularly with the older db interface, since ndb does some caching to try to ameliorate the problem.

A second problem is that code inside transactions must be idempotent, since it can be called more than once if the transaction needs to retry. This is particularly difficult to ensure when using transactions in relatively high-level code, which calls other functions that may not be aware they're being called inside a transaction.

def update_points_after_watching_video(self, video):
    # Protect against this being called twice.
    key = 'points_updated.%s.%s' % (self.user_id, video.id)
    if not memcache.get(key):
        self.points += video.points_value
        memcache.set(key, True)   # UNSAFE: not undone if the transaction rolls back

@ndb.transactional
def update_after_watching_video(user_id, video):
    user = GetUser(user_id)
    user.update_points_after_watching_video(video)
    video_log = GetUserVideoLog(user)
    video_log.add_watched_video(video)
    ndb.put_multi([user, video_log])

An equally serious problem is that you cannot do a datastore query inside a transaction:

@ndb.transactional
def update_videos_seen(user_id):
    user = GetUser(user_id)
    video_query = UserVideoLog.query().filter(
        UserVideoLog.user == user)
    user.video_count += video_query.count()  # ILLEGAL: no queries inside a transaction
    user.put()

This restriction has hit Khan Academy hard. As with the idempotence requirement, it's particularly onerous when using transactions in higher-level code, which calls helper functions that do queries. We have one API endpoint that we call when a user successfully completes an exercise, which updates a lot of state: the list of exercises a user has done, of course, but also their proficiency model, list of future recommended exercises, user notifications, etc. Many of these updates require datastore queries, often in helper functions buried deep in the call-stack. As a result, we cannot use transactions to atomically update the user-state upon exercise completion.

Locking: a transaction alternative

At Khan Academy, we have developed an alternative to transactions, which we are introducing today, complete with source code. This alternative uses a simple locking scheme in lieu of transactions. It also works alongside transactions: for a given datastore model, you can decide whether you want to use transactions or locking to protect its atomicity. If you have a function that updates two entities, one of which requires a transaction and the other of which requires a lock, you can make your function both transactional and lock-acquiring; the two can co-exist.

Locks are a simple way to ensure atomicity for a single entity: while you hold the lock for an entity, nobody else can read or write the entity. It is easy to see how this ensures atomic operation. (This is a simplification; I discuss below how we allow concurrent read-only operations while a lock is being held.)

As this simple description makes clear, locks have several disadvantages:

  • Unless your locks are very fine-grained, forcing other applications to wait while you hold a lock is less efficient than transactions
  • When acquiring multiple locks, you have to worry about deadlock
  • Locks do not use rollbacks, meaning they cannot guarantee "both or neither" update semantics
  • Writing a global lockservice is actually quite hard

These problems are all solvable, as described below. And the advantages locks have -- intuitive data-access, no restrictions on datastore queries -- make them very appealing in the right situations.

How our locks work

The public API is very simple:

class FinishedExercise(RequestHandler):
    def get(self, user_id):
        # Wait up to 5 seconds to acquire this user's lock.
        with lock_util.global_lock(key=user_id, timeout=5):
            user = GetUser(user_id)
            ...  # update exercise state, proficiency, recommendations, etc.

Now the finished-exercise API call will only run once it has acquired the lock for user_id from the lockservice. In the meantime, no other code that acquires the lock for user_id can run until this routine has finished. If we are unable to acquire the lock within 5 seconds, global_lock raises an exception.

While I describe this mechanism as a "lock" mechanism, it would be more accurately described as a lease: when you acquire the lock, the lockservice only promises it to you for a certain amount of time. This works well in the context of App Engine because App Engine will abort a request that takes too long (a minute for "frontend" requests, 10 minutes for "backend" requests). We hard-code that knowledge into lock_util, though you can override it using the lock_timeout parameter. By using a leasing model, we ensure that a bug or networking error can't cause a lock to be held forever.
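
The lease behavior can be sketched with an in-memory table standing in for the lockservice. Names here are illustrative, not the real lock_util code:

```python
import time

# A sketch of lease-style locking with an in-memory table as a stand-in
# for the lockservice; names are illustrative, not the real lock_util.
leases = {}

def try_acquire(key, lock_timeout=60.0, now=None):
    """Grant the lock unless an unexpired lease for key exists."""
    now = time.time() if now is None else now
    expires = leases.get(key)
    if expires is not None and expires > now:
        return False                       # a live lease is outstanding
    leases[key] = now + lock_timeout       # grant (or reclaim) the lease
    return True
```

A crashed holder's lease simply times out, so nothing can hold the lock forever.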

The locking code logs when a lock is acquired and released, and whether there was contention when acquiring it. This has had a surprising benefit: it's significantly easier to diagnose sources of contention than it was with transactions.

When do locks make sense?

Transactions are more efficient than locks because typically locks are too coarse. At Khan Academy, we use a single lock for a user, which protects all datastore entities associated with that user. So if one function wants to update a UserData entity, while the other wants to update a UserVideoLog entity, they could both run in parallel in a transaction world but would block each other in a locking world.

A per-user lock works well for us because typically a Khan Academy user is only doing one thing at a time. So it's unlikely that two processes would want to update two different user-specific entities in parallel. (But see "batch processing," below.)

In general we consider transactions to be superior to locks when they work: when a model only needs protection in idempotent, low-level, query-free code. When possible, we'll use transactions even for per-user models. Likewise, we've never replaced existing uses of transactions with locks. Instead, we use locks to protect code that previously ran entirely unprotected.

In our experience, a user lock is the most useful kind of lock to have: it's unusual for two parallel requests to both be modifying information about a single user, and there are typically many datastore entities associated with a single user. Acquiring a single lock for the user at the beginning of the request can yield a lot of datastore safety with very little cognitive cost.

Avoiding deadlock

Whenever you acquire multiple locks at once, you have the potential for deadlock. At Khan Academy we attempt to avoid deadlock by having a canonical order in which we acquire locks: first we acquire the lock for the "current" user, then any children they have, then their coach, etc. If that doesn't work, the deadlock will be broken when a lease expires.
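
A canonical acquisition order can be sketched like this, using in-process threading locks as a stand-in for the distributed lockservice (illustrative names only):

```python
import threading

# A sketch of deadlock avoidance via canonical ordering: whatever set of
# locks a request needs, acquire them in one global order (sorted by key
# here), so two requests can never hold pieces of each other's sets.
# In-process locks stand in for the distributed lockservice.
_locks = {}

def lock_for(key):
    return _locks.setdefault(key, threading.Lock())

def acquire_all(keys):
    """Acquire every lock in keys, always in sorted order."""
    acquired = []
    for key in sorted(keys):
        lock = lock_for(key)
        lock.acquire()
        acquired.append(lock)
    return acquired

def release_all(locks):
    for lock in reversed(locks):
        lock.release()
```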

That said, only rarely do we need to acquire multiple locks in a single request. If we used more kinds of locks than just a user lock, this might become more of an issue.

Using locks with transactions

Locks do not guarantee "both or neither" semantics. Consider this code:

def watched_video_locked(user_id, video):
    with lock_util.acquire_lock(user_id):
        user = GetUser(user_id)
        user.num_videos_watched += 1
        user.put()
        video_log = GetUserVideoLog(user)
        video_log.add_watched_video(video)
        video_log.put()

If add_watched_video throws an exception, we'll end up with num_videos_watched being updated but the video_log not being updated.

The solution is simple: use a transaction as well.

def watched_video_locked(user_id, video):
    with lock_util.acquire_lock(user_id):
        # safe_watched_video is the transactional version from above,
        # which gives us rollback if anything throws.
        safe_watched_video(user_id, video)

If we still have to use a transaction, what benefit does the lock provide? The answer is that we can use transactions more sparingly, in places where their restrictions are less onerous. In particular, at Khan Academy we now use transactions only in low-level code, after we've done the datastore queries we need to do, and we no longer have to worry about all the problems (described above) that arise with "nested" transactions.

Our global lockservice: memcache

Implementing a reliable, global lockservice is not a trivial task. Google has one it uses internally, called Chubby, but it is not made available to App Engine users. So we had to develop our own, application-layer lockservice. We did it using memcache.

On the face of it, memcache is a terrible choice for a lockservice, because by design it does not guarantee that inserted items remain in the cache. It's not very useful to have a lockservice that's always forgetting about your locks!

But in practice, this has not been a problem. Locks are very small, so are unlikely to fill a memcache by themselves. And any reasonable memcache implementation is going to evict old entries before new ones. Locks, by our design, are short-lived -- 10 minutes at most -- so are unlikely to be evicted. (To be sure, I would feel less confident were Khan Academy using a shared memcache instead of a dedicated instance.)

The bigger problem with memcache is that you communicate with it over the network, and the network can be flaky. While memcache evictions have not been a problem, failed memcache writes are a fact of life for us. We have to be careful handling them, since we do not want our entire site to go down if memcache is unreachable. We've chosen to continue optimistically -- assume we have the lock even though memcache has not confirmed it -- risking the potential for non-atomic writes in order to gain site reliability.

On the plus side, memcache supports the operations a locking service needs: atomic reads and writes, shared state across processes, and fast access.

We implement the locking via busy-waiting: we do an atomic memcache-create of the lock-key, and if it fails (meaning someone else is holding the lock), we wait a second and then try again. Once it succeeds, we hold the lock. At the end of the request, we do an atomic memcache-delete to release the lock. The "lease" functionality comes automatically with memcache: when we create the memcache entry, we do so with a timeout, causing memcache to automatically flush the entry if we do not manually delete it first.
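
Here's a sketch of that busy-wait scheme, with a tiny in-memory stand-in for memcache's atomic create and delete-with-expiry. This is illustrative only; the real code talks to actual memcache, whose add() takes a `time=` expiry argument:

```python
import time

# Sketch of the busy-wait lock using an in-memory stand-in for
# memcache's atomic create/delete-with-expiry. Illustrative only.
class FakeMemcache:
    def __init__(self):
        self._data = {}

    def add(self, key, value, expiry_secs=0):
        """Atomic create: fails if an unexpired entry already exists."""
        now = time.time()
        entry = self._data.get(key)
        if entry is not None and (entry[1] == 0 or entry[1] > now):
            return False
        self._data[key] = (value, now + expiry_secs if expiry_secs else 0)
        return True

    def delete(self, key):
        self._data.pop(key, None)

mc = FakeMemcache()

def acquire(key, lease_secs=600, acquire_timeout=5.0, retry_interval=1.0):
    deadline = time.time() + acquire_timeout
    while time.time() < deadline:
        if mc.add('lock.' + key, 'held', expiry_secs=lease_secs):
            return True                 # we created the entry: lock is ours
        time.sleep(retry_interval)      # someone else holds it; try again
    raise RuntimeError('could not acquire lock for %s' % key)

def release(key):
    mc.delete('lock.' + key)            # atomic delete releases the lock
```

The expiry on the entry is what gives us the lease for free: if the holder never calls release(), the entry simply times out.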

Batch processing

I mentioned above that the user-lock works well for us because typically only one process is trying to access a single user at a time. There's one big exception to this rule though: batch processes that run over all our users. For instance, we have a nightly cronjob that can award users time-based badges ("You've used Khan Academy every day this month!"). It's very possible for a user's interactive requests to run at the same time as the batch request hits that user.

To maintain interactive behavior in that situation, we have a simple form of lock priority: locks can either say they are for interactive tasks, or batch tasks. (In practice, we determine this automatically based on the url of the current request, but it could easily be an input parameter.)

We implement the priority scheme by having batch processes wait 1.1 seconds before first trying to acquire the lock. Since any concurrent client for the lock is retrying the memcache-create once a second, this guarantees anyone else waiting will have a chance to acquire the lock before the batch process does. While interactive processes can still starve each other, batch processes cannot starve interactive ones.
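
The priority scheme can be sketched as a retry loop with an initial delay for batch callers. Names here are hypothetical; the real code also classifies requests by URL:

```python
# Hypothetical sketch of lock priority: a batch caller sleeps slightly
# longer than one retry interval before its *first* attempt, so any
# interactive caller already in its retry loop gets a chance first.
INTERACTIVE_RETRY_INTERVAL = 1.0
BATCH_INITIAL_DELAY = 1.1   # strictly longer than one retry interval

def acquire_with_priority(try_once, is_batch, sleep_fn, max_attempts=5):
    """try_once() is any atomic lock-create; returns True on success."""
    if is_batch:
        sleep_fn(BATCH_INITIAL_DELAY)   # yield to interactive waiters
    for _ in range(max_attempts):
        if try_once():
            return True
        sleep_fn(INTERACTIVE_RETRY_INTERVAL)
    return False
```

Passing sleep_fn in as a parameter makes the delay policy easy to test without real sleeping.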

Allowing read-only datastore access

There remains one final situation I promised to address: what to do with read-only access. You don't need to acquire a lock for read-only access, and in general you don't want to, so that two processes can read the same data at the same time. The trouble is that code can't tell a priori that an access is going to be read-only (and indeed, a data access in a low-level function might be read-only in some cases but part of a read-write pattern in others). We could require applications to tell the locking system whether an access is read-only, but that's onerous and brittle. We'd like to do what transactions do: automatically figure out at write-time whether a read was read-only, and handle things appropriately in either case.

We can't do that, but we can get close, by using runtime assertions.

Our basic scheme is to allow a process to read an entity without holding the lock. That is, we assume all reads are made by read-only processes. But we also keep track of the fact that that process has read that entity without holding a lock. If the process then ever tries to put() the entity, we notice it had done a prior read without a lock, and complain loudly at that time.

The fundamental idea here is that a process has to know if an entity-read might lead to a subsequent write. If so, it is the application's responsibility to acquire the lock before the read. We have some runtime checks to make sure that happens.
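
Here's a sketch of that runtime check, with in-memory stand-ins for the datastore hooks (all names here are illustrative):

```python
# Sketch of the runtime check: remember every entity read while its lock
# wasn't held, and refuse a later put() of that entity. In-memory
# stand-ins for the real datastore hooks; illustrative names only.
held_locks = set()
read_without_lock = set()
entities = {"user:1": {"points": 0}}

def get(key):
    if key not in held_locks:
        read_without_lock.add(key)   # fine, as long as we never write it
    return entities[key]

def put(key, entity):
    if key in read_without_lock:
        # A write after an unlocked read: complain loudly.
        raise AssertionError("%s was read without holding its lock" % key)
    entities[key] = entity
```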

But wait, there's more...

How these runtime checks are implemented in App Engine is a fascinating story in its own right. Stay tuned for our next blog post, where I talk about that in a lot more detail.

In the meantime, if you're interested in the source code for our locking scheme, have at it! The uber-high-level fetch_under_user_write_lock() function requires some code from the next blog post, but the rest of it is usable today.