Troubleshooting API ML

Last Updated: 3/1/2019, 3:12:01 AM

As an API Mediation Layer user, you may encounter problems with the functioning of API ML. This article presents known API ML issues and their solutions.

Enable API ML Debug Mode

Use debug mode to activate the following functions:

  • Display additional debug messages for the API ML
  • Enable changing log level for individual code components

Important: We highly recommend that you enable debug mode only when you want to troubleshoot issues. Disable debug mode when you are not troubleshooting. Running in debug mode while operating API ML can adversely affect its performance and create large log files that consume a large volume of disk space.

Follow these steps:

  1. Set the MFS_LOG_LEVEL parameter to "debug" in the MFSxPRM member. The member resides in the RUNHLQ.CMFSOPTN data set.
  2. Restart the API ML internal services (Gateway, Discovery Service, and Catalog) as applicable to the problem that you are troubleshooting. You successfully enabled debug mode.
  3. Repeat the procedure that initially caused the problem.
  4. Review the debug messages and contact Support, if necessary.
  5. After you finish troubleshooting, set the MFS_LOG_LEVEL parameter back to its initial setting.
  6. Restart all API ML services (Gateway, Discovery Service, and Catalog). You successfully disabled debug mode.
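As a sketch, if the MFSxPRM member holds one KEY=VALUE assignment per line (a hypothetical layout; confirm the actual format of your site's RUNHLQ.CMFSOPTN member), the change in step 1 might look like:

```
MFS_LOG_LEVEL=debug
```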

Change the Log Level of Individual Code Components

You can change the log level of a particular code component of the API ML internal service at run time.

Follow these steps:

  1. Enable API ML Debug Mode as described in Enable API ML Debug Mode. This activates the application/loggers endpoints in each API ML internal service (Gateway, Discovery Service, and Catalog).

  2. List the available loggers of a service by issuing the GET request for the given service URL:

    GET scheme://hostname:port/application/loggers


    • scheme

      API ML service scheme (http or https)

    • hostname

      API ML service hostname

    • port

      TCP port on which the API ML service listens. The port is defined by the configuration parameter MFS_GW_PORT for the Gateway, MFS_DS_PORT for the Discovery Service (by default, the Gateway port + 1), and MFS_AC_PORT for the Catalog (by default, the Gateway port + 2).

    Tip: One way to issue REST calls is to use the http command in the free HTTPie tool:

    http GET https://hostname:port/application/loggers
  3. Alternatively, you can extract the configuration of a specific logger by issuing the extended GET request:

    GET scheme://hostname:port/application/loggers/{name}


    • {name}

      is the logger name

  4. Change the log level of the given component of the API ML internal service. Use the POST request for the given service URL:

    POST scheme://hostname:port/application/loggers/{name}

    The POST request requires a new log level parameter value that is provided in the request body:

    "configuredLevel": "level"


    • level

      is the new log level: OFF, ERROR, WARN, INFO, DEBUG, TRACE


    HTTPie command:

    http POST https://hostname:port/application/loggers/{name} configuredLevel=WARN
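The GET/POST flow in the steps above can also be scripted. The following is a minimal sketch using only the Python standard library; the base URL and logger name are placeholder assumptions, and the request body follows the {"configuredLevel": "level"} format shown in step 4:

```python
import json
import urllib.request

def build_level_request(base_url, logger_name, level):
    """Build a POST request that sets the given logger to the given level.

    The body {"configuredLevel": "<level>"} matches the format described
    in the procedure above.
    """
    body = json.dumps({"configuredLevel": level}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/application/loggers/{logger_name}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage (host, port, and logger name are placeholders):
req = build_level_request("https://hostname:10010", "com.example.gateway", "WARN")
print(req.full_url)
print(req.data.decode())
```

Sending the request (for example with urllib.request.urlopen) is left out here because it requires a running API ML instance and valid TLS setup.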

Known Issues

API ML stops accepting connections after z/OS TCP/IP stack is recycled


When the z/OS TCP/IP stack is restarted, the internal services of the API Mediation Layer (Gateway, Catalog, and Discovery Service) may stop accepting all incoming connections, go into a continuous loop, and write numerous error messages to the log.

Sample message:

The following message is a typical error message displayed in STDOUT:

2018-Sep-12 12:17:22.850 -- Socket accept failed EDC5122I Input/output error.
    at ... [tomcat-coyote-8.5.29.jar!/:8.5.29]
    (remainder of stack trace truncated)


Solution: Restart API Mediation Layer.

Tip: To prevent this issue from occurring, it is strongly recommended not to restart the z/OS TCP/IP stack while API ML is running.
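The failure above can be detected externally with a simple TCP connect probe against the Gateway port. This is a minimal sketch; the host and port are placeholders for your Gateway's hostname and MFS_GW_PORT value:

```python
import socket

def port_accepting(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage: probe the Gateway; if it stops accepting connections
# after a TCP/IP stack recycle, restart API Mediation Layer.
# print(port_accepting("hostname", 10010))
```

A probe like this only confirms that the listener accepts connections; it does not replace checking the service logs for the EDC5122I error shown above.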