Bug 3711 - Response pool use-after-free memory corruption error
Status: CLOSED FIXED
Product: ProFTPD
Component: core
Version: 1.3.3
Hardware: All
OS: All
Importance: P1 blocker
Assigned To: proftpd development group
Keywords: Backport
Reported: 2011-11-08 18:53 UTC by TJ Saunders
Modified: 2011-11-11 11:17 UTC (History)
CC: 6 users

See Also:


Attachments
Fixes bug (726 bytes, patch)
2011-11-09 17:30 UTC, TJ Saunders
Details

Description TJ Saunders 2011-11-08 18:53:19 UTC
The Zero Day Initiative (ZDI) has disclosed a use-after-free memory corruption
error in the proftpd Response API code, via the security@proftpd.org address:

From: ZDI Disclosures <zdi-disclosures@tippingpoint.com>
Subject: ZDI-CAN-1420: New Vulnerability Report
Date: October 28, 2011 19:23:25 GMT+02:00
To: security@proftpd.org

ZDI-CAN-1420: ProFTPD Response Pool Use-After-Free Remote Code Execution
Vulnerability


-- CVSS -----------------------------------------

9, AV:N/AC:L/Au:S/C:C/I:C/A:C


-- ABSTRACT -------------------------------------

TippingPoint has identified a vulnerability affecting the following products:

  ProFTPD FTP Server


-- VULNERABILITY DETAILS ------------------------

This vulnerability is located within the ProFTPD daemon and occurs due to the
way the server manages pools that are used for responses sent by the server to
the client. When attempting to handle an exceptional condition, the server
fails to restore a pointer that is used to contain an FTP response, and as such
can be used to trigger controlled memory corruption.

The core of this vulnerability is described in the following function, which is
located in src/main.c. The pr_cmd_dispatch_phase function is responsible for
dispatching calls to any of the commands that are registered in the proftpd
modules/ list. Upon entry to this function, the server essentially pushes the
state of the resp_pool so that it can be restored upon return. However, if an
error occurs while executing a precmd, the server fails to restore the state.
The push and restore are done with the pr_response_get_pool() and
pr_response_set_pool(...) functions.

src/main.c:659
int pr_cmd_dispatch_phase(cmd_rec *cmd, int phase, int flags) {
 char *cp = NULL;
 int success = 0;
 pool *resp_pool = NULL;   // XXX
...
 /* Get any previous pool that may be being used by the Response API.
  *
  * In most cases, this will be NULL.  However, if proftpd is in the
  * midst of a data transfer when a command comes in on the control
  * connection, then the pool in use will be that of the data transfer
  * instigating command.  We want to stash that pool, so that after this
  * command is dispatched, we can return the pool of the old command.
  * Otherwise, Bad Things (segfaults) happen.
  */
 resp_pool = pr_response_get_pool();       // XXX: local that's cmd->pool

 /* Set the pool used by the Response API for this command. */
 pr_response_set_pool(cmd->pool);          // XXX

...
 if (phase == 0) {

   /* First, dispatch to wildcard PRE_CMD handlers. */
   success = _dispatch(cmd, PRE_CMD, FALSE, C_ANY);

   if (!success)      /* run other pre_cmd */
     success = _dispatch(cmd, PRE_CMD, FALSE, NULL);

   if (success < 0) {

     /* Dispatch to POST_CMD_ERR handlers as well. */

     _dispatch(cmd, POST_CMD_ERR, FALSE, C_ANY);
     _dispatch(cmd, POST_CMD_ERR, FALSE, NULL);

     _dispatch(cmd, LOG_CMD_ERR, FALSE, C_ANY);
     _dispatch(cmd, LOG_CMD_ERR, FALSE, NULL);

     pr_response_flush(&resp_err_list);
     return success;       // XXX
   }
...
 } else {
   switch (phase) {
     case PRE_CMD:
     case POST_CMD:
     case POST_CMD_ERR:
       success = _dispatch(cmd, phase, FALSE, C_ANY);
       if (!success)
         success = _dispatch(cmd, phase, FALSE, NULL);
       break;

     case CMD:
       success = _dispatch(cmd, phase, FALSE, C_ANY);
       if (!success)
         success = _dispatch(cmd, phase, TRUE, NULL);
       break;

     case LOG_CMD:
     case LOG_CMD_ERR:
       (void) _dispatch(cmd, phase, FALSE, C_ANY);
       (void) _dispatch(cmd, phase, FALSE, NULL);
       break;

     default:
       errno = EINVAL;
       return -1;      // XXX: skips last state
   }
...
 /* Restore any previous pool to the Response API. */
 pr_response_set_pool(resp_pool);  // XXX: local

 return success;
}
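The early return flagged above can be illustrated with a minimal, hypothetical sketch in plain C (this is not the actual ProFTPD source or the committed patch): a dispatch routine that stashes the previously active Response API pool on entry must restore it on every exit path, including the PRE_CMD failure path that the vulnerable code returned from early. The `pr_response_*` stand-ins below are simplified assumptions, not the real implementations from src/response.c.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the Response API pool accessors; the real
 * functions operate on a module-level pool in the ProFTPD sources.
 */
static void *resp_pool = NULL;

static void pr_response_set_pool(void *p) { resp_pool = p; }
static void *pr_response_get_pool(void) { return resp_pool; }

/* Sketch of the corrected dispatch flow: the previously active pool is
 * stashed on entry and restored on EVERY exit path, including the
 * PRE_CMD error path that the vulnerable code returned from early,
 * leaving the Response API holding the soon-to-be-destroyed cmd pool.
 */
static int dispatch_phase(void *cmd_pool, int precmd_fails) {
  void *prev_pool = pr_response_get_pool();  /* stash previous pool */
  pr_response_set_pool(cmd_pool);            /* pool for this command */

  if (precmd_fails) {
    /* Error path: restore before returning, so the Response API never
     * keeps a pointer to a pool that is about to be destroyed. */
    pr_response_set_pool(prev_pool);
    return -1;
  }

  pr_response_set_pool(prev_pool);           /* normal-path restore */
  return 0;
}
```

The key invariant is that every `return` is paired with a restoring `pr_response_set_pool(prev_pool)`; the vulnerability is precisely the one exit path where that pairing was missing.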

In order to reach this code, one needs to put the server into a state where
more than one pool can exist. This can be done by starting an FTP data transfer
via xfer_stor or xfer_recv. Once inside a data transfer, the server will then
enter the pr_data_xfer function with a valid pool. Immediately after, the
server will return the old pool back to proftpd's allocation list but still
globally retain a reference to it as a response buffer. The next time a buffer
is allocated, the server will return this memory back to the caller. If a
response occurs, this will overwrite data that was allocated, triggering memory
corruption.

src/data.c:875
int pr_data_xfer(char *cl_buf, int cl_size) {
 int len = 0;
 int total = 0;
 int res = 0;

 /* Poll the control channel for any commands we should handle, like
  * QUIT or ABOR.
  */
...
     for (ch = cmd->argv[0]; *ch; ch++)
       *ch = toupper(*ch);

     /* Only handle commands which do not involve data transfers; we
      * already have a data transfer in progress.  For any data transfer
      * command, send a 450 ("busy") reply.  Looks like almost all of the
      * data transfer commands accept that response, as per RFC959.
      *
      * We also prevent the EPRT, EPSV, PASV, and PORT commands, since
      * they will also interfere with the current data transfer.  In doing
      * so, we break RFC compliance a little; RFC959 does not allow a
      * response code of 450 for those commands (although it should).
      */
     if (strcmp(cmd->argv[0], C_APPE) == 0 ||
         strcmp(cmd->argv[0], C_LIST) == 0 ||
         strcmp(cmd->argv[0], C_MLSD) == 0 ||
         strcmp(cmd->argv[0], C_NLST) == 0 ||
         strcmp(cmd->argv[0], C_RETR) == 0 ||
         strcmp(cmd->argv[0], C_STOR) == 0 ||
         strcmp(cmd->argv[0], C_STOU) == 0 ||
         strcmp(cmd->argv[0], C_RNFR) == 0 ||
         strcmp(cmd->argv[0], C_RNTO) == 0 ||
         strcmp(cmd->argv[0], C_PORT) == 0 ||
         strcmp(cmd->argv[0], C_EPRT) == 0 ||
         strcmp(cmd->argv[0], C_PASV) == 0 ||
         strcmp(cmd->argv[0], C_EPSV) == 0) {
       pool *resp_pool;
...
     } else if (strcmp(cmd->argv[0], C_NOOP) == 0) {
...
     } else {
       pr_cmd_dispatch(cmd);       // XXX
...
       destroy_pool(cmd->pool);    // XXX
...
 return (len < 0 ? -1 : len);
}
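The dangling-reference sequence described above can be modeled with a small, self-contained sketch (hypothetical code, not the ProFTPD sources). Pools are represented here as structs with a liveness flag instead of real allocations, so the stale global reference can be checked without actually dereferencing freed memory.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the hazard: pools are tracked with a liveness flag
 * instead of real allocation, so the dangling reference can be
 * observed safely. */
struct pool {
  int alive;
};

/* Global pool reference kept by the (simulated) Response API. */
static struct pool *resp_pool = NULL;

static void pr_response_set_pool(struct pool *p) { resp_pool = p; }

static void destroy_pool(struct pool *p) { p->alive = 0; }

/* Sequence from pr_data_xfer: the command is dispatched (which, on the
 * buggy error path, leaves cmd->pool stored in the global), then the
 * command's pool is destroyed -- but the global still points at it.
 * Returns 1 when the global holds a dangling reference. */
static int vulnerable_sequence(struct pool *cmd_pool) {
  pr_response_set_pool(cmd_pool);  /* buggy dispatch never restored it */
  destroy_pool(cmd_pool);          /* pool returned to the allocator   */
  return resp_pool == cmd_pool && !resp_pool->alive;
}
```

In the real server, the next allocation can hand that memory back to another caller, so a subsequent response write through the stale pool overwrites live data, which is the controlled corruption the report describes.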


-- CREDIT ---------------------------------------

This vulnerability was discovered by:

  Anonymous

-- FURTHER DETAILS ------------------------------

If supporting files were contained with this report they are provided within a
password protected ZIP file. The password is the ZDI candidate number in the
form: ZDI-CAN-XXXX where XXXX is the ID number.

Please confirm receipt of this report. We expect all vendors to remediate ZDI
vulnerabilities within 180 days of the reported date. If you are ready to
release a patch at any point leading up to the deadline, please coordinate with
us so that we may release our advisory detailing the issue. If the 180 day
deadline is reached and no patch has been made available, we will release a
limited public advisory with our own mitigations so that the public can protect
themselves in the absence of a patch. Please keep us updated regarding the
status of this issue and feel free to contact us at any time:

Zero Day Initiative
zdi-disclosures@tippingpoint.com

The PGP key used for all ZDI vendor communications is available from:

    http://www.zerodayinitiative.com/documents/zdi-pgp-key.asc

-- INFORMATION ABOUT THE ZDI ---------------------

Established by TippingPoint, the Zero Day Initiative (ZDI) represents a
best-of-breed model for rewarding security researchers for responsibly
disclosing discovered vulnerabilities.

The ZDI is unique in how the acquired vulnerability information is used.
TippingPoint does not re-sell the vulnerability details or any exploit code.
Instead, upon notifying the affected product vendor, TippingPoint provides its
customers with zero day protection through its intrusion prevention technology.
Explicit details regarding the specifics of the vulnerability are not exposed
to any parties until an official vendor patch is publicly available.
Furthermore, with the altruistic aim of helping to secure a broader user base,
TippingPoint provides this vulnerability information confidentially to security
vendors (including competitors) who have a vulnerability protection or
mitigation product.

Please contact us for further information or refer to:

   http://www.zerodayinitiative.com

-- DISCLOSURE POLICY ----------------------------

Our vulnerability disclosure policy is available online at:

   http://www.zerodayinitiative.com/advisories/disclosure_policy/
Comment 1 TJ Saunders 2011-11-09 17:30:58 UTC
Created attachment 3676 [details]
Fixes bug
Comment 2 TJ Saunders 2011-11-09 17:37:53 UTC
Patch committed to CVS, and backported to 1.3.3 branch.
Comment 3 TJ Saunders 2011-11-10 17:43:06 UTC
Resolved in 1.3.4.
Comment 4 TJ Saunders 2011-11-10 23:54:28 UTC
This issue has been assigned CVE-2011-4130, for future reference.
Comment 5 Christian Wittmer 2011-11-11 10:54:29 UTC
is it fixed in 1.3.3g too ?
Comment 6 Paul Howarth 2011-11-11 11:17:23 UTC
(In reply to comment #5)
> is it fixed in 1.3.3g too ?

Yes, see: http://www.proftpd.org/docs/NEWS-1.3.3g

Not sure why it didn't make the release notes.