[2.7] bpo-38106: Fix race in PyThread_release_lock that can lead to memory corruption and deadlock #16047

Merged (1 commit, Oct 3, 2019)

Commits on Oct 2, 2019

  1. [2.7] bpo-38106: Fix race in PyThread_release_lock that can lead to memory corruption and deadlock
    
    On Darwin, even though it is considered POSIX, Python uses a mutex plus a
    condition variable to implement its lock. As of 20190828 the Py2.7
    implementation still contains a synchronization bug, even though a similar
    issue was fixed for Py3 in 2012: the condition is signalled after the mutex
    is unlocked, while the correct protocol is to signal the condition while
    holding the mutex (see the sketch after the links below):
    
      https://github.com/python/cpython/blob/v2.7.16-127-g0229b56d8c0/Python/thread_pthread.h#L486-L506
      python@187aa545165d (py3 fix)
    
    PyPy has the same bug for both pypy2 and pypy3:
    
      https://bitbucket.org/pypy/pypy/src/578667b3fef9/rpython/translator/c/src/thread_pthread.c#lines-443:465
      https://bitbucket.org/pypy/pypy/src/5b42890d48c3/rpython/translator/c/src/thread_pthread.c#lines-443:465
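
    For concreteness, here is a minimal sketch in C of that condvar-based lock
    and of its buggy release path. This is a simplification, not the literal
    code of either project; function names are illustrative, field names follow
    the pseudo-code below, and error checking is omitted:

      #include <pthread.h>

      typedef struct {
          char            locked;         /* 0 = unlocked, 1 = locked */
          pthread_cond_t  lock_released;  /* signalled on release */
          pthread_mutex_t mut;            /* protects .locked */
      } pthread_lock;

      static void acquire_lock(pthread_lock *lock)
      {
          pthread_mutex_lock(&lock->mut);
          while (lock->locked)
              pthread_cond_wait(&lock->lock_released, &lock->mut);
          lock->locked = 1;
          pthread_mutex_unlock(&lock->mut);
      }

      /* buggy release: the condition is signalled after the mutex is
         unlocked */
      static void release_lock_buggy(pthread_lock *lock)
      {
          pthread_mutex_lock(&lock->mut);
          lock->locked = 0;
          pthread_mutex_unlock(&lock->mut);
          /* BUG: right here another thread can acquire and then free the
             lock; the signal below then writes into freed memory */
          pthread_cond_signal(&lock->lock_released);
      }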
    
    Signalling a condition outside of its corresponding mutex is considered OK
    by POSIX, but in the Python context it can lead to at least memory
    corruption, if we consider the whole lifetime of a Python-level lock. For
    example, the following logical scenario:
    
          T1                                          T2
    
      sema = Lock()
      sema.acquire()
    
                                                  sema.release()
    
      sema.acquire()
      free(sema)
    
      ...
    
    can translate into the following C-level calls:
    
          T1                                          T2
    
      # sema = Lock()
      sema = malloc(...)
      sema.locked = 0
      pthread_mutex_init(&sema.mut)
      pthread_cond_init (&sema.lock_released)
    
      # sema.acquire()
      pthread_mutex_lock(&sema.mut)
      # sees sema.locked == 0
      sema.locked = 1
      pthread_mutex_unlock(&sema.mut)
    
                                                  # sema.release()
                                                  pthread_mutex_lock(&sema.mut)
                                                  sema.locked = 0
                                                  pthread_mutex_unlock(&sema.mut)
    
                      # OS scheduler preempts T2 and gives control
                      # to another thread/process
                                                  ...
    
      # second sema.acquire()
      pthread_mutex_lock(&sema.mut)
      # sees sema.locked == 0
      sema.locked = 1
      pthread_mutex_unlock(&sema.mut)
    
      # free(sema)
      pthread_mutex_destroy(&sema.mut)
      pthread_cond_destroy (&sema.lock_released)
      free(sema)
    
      # ...
      e.g. malloc() which returns memory where sema was
    
                                                  ...
                          # OS scheduler returns control to T2
                          # sema.release() continues
                          #
                      # BUT sema was already freed, and writing anywhere
                      # inside the sema block CORRUPTS MEMORY. In particular,
                      # if _another_ Python-level lock was allocated where the
                      # sema block was, writing into that memory can break
                      # further synchronization and, in particular, lead to a
                      # deadlock on the lock that was allocated next.
                                                  pthread_cond_signal(&sema.lock_released)
    
    Note that T2's pthread_cond_signal(&sema.lock_released) CORRUPTS MEMORY, as
    it is called after the sema memory has already been freed and potentially
    reallocated for another object.
    
    The fix is to move pthread_cond_signal so that it is issued while holding
    the corresponding mutex:
    
      # sema.release()
      pthread_mutex_lock(&sema.mut)
      sema.locked = 0
      pthread_cond_signal(&sema.lock_released)
      pthread_mutex_unlock(&sema.mut)
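
    In C, the fixed release path takes roughly the following shape (a sketch
    continuing the one above; error checking omitted):

      /* fixed release: signal while still holding the mutex */
      static void release_lock_fixed(pthread_lock *lock)
      {
          pthread_mutex_lock(&lock->mut);
          lock->locked = 0;
          /* wake up someone (anyone, if any) waiting on the lock; done
             under the mutex, so no other thread can re-acquire and free
             the lock before we are finished touching its memory */
          pthread_cond_signal(&lock->lock_released);
          pthread_mutex_unlock(&lock->mut);
      }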
    
    To do so, this patch cherry-picks the thread_pthread.h part of the
    following 3.2 commit:
    
    commit 187aa54
    Author: Kristján Valur Jónsson <kristjan@ccpgames.com>
    Date:   Tue Jun 5 22:17:42 2012 +0000
    
        Signal condition variables with the mutex held.  Destroy condition variables
        before their mutexes.
    
     Python/ceval_gil.h      |  9 +++++----
     Python/thread_pthread.h | 15 +++++++++------
     2 files changed, 14 insertions(+), 10 deletions(-)
    
    (ceval_gil.h is Python 3 specific and does not apply to Python 2.7.)
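
    The "destroy condition variables before their mutexes" part of that commit
    corresponds to a free path along the following lines (again a sketch
    continuing the ones above; error checking omitted):

      #include <stdlib.h>

      /* free: destroy the condition variable before its mutex */
      static void free_lock(pthread_lock *lock)
      {
          pthread_cond_destroy(&lock->lock_released);
          pthread_mutex_destroy(&lock->mut);
          free(lock);
      }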
    
    --------
    
    Bug history
    
    The bug has been there since 1994 - since at least [1]. It was discussed in
    2001 with the original code author [2], but the code was still considered
    to be race-free. In 2010 the question of where pthread_cond_signal should
    go - before or after pthread_mutex_unlock - was discussed, with the
    rationale of avoiding threads bouncing [3,4,5], and in 2012
    pthread_cond_signal was moved to be called under the mutex, but only for
    CPython 3 [6,7].
    
    In 2019 the bug was (re-)discovered while testing Pygolang [8] on macOS
    with CPython 2, PyPy2 and PyPy3.
    
    [1] python@2c8cb9f3d240
    [2] https://bugs.python.org/issue433625
    [3] https://bugs.python.org/issue8299#msg103224
    [4] https://bugs.python.org/issue8410#msg103313
    [5] https://bugs.python.org/issue8411#msg113301
    [6] https://bugs.python.org/issue15038#msg163187
    [7] python@187aa545165d
    [8] https://pypi.org/project/pygolang
    navytux committed Oct 2, 2019