Troubleshooting API ML

Last Updated: 5/20/2019, 4:07:33 PM

As an API Mediation Layer user, you may encounter problems with the functioning of API ML. This article presents known API ML issues and their solutions.

Enable API ML Debug Mode

Use debug mode to activate the following functions:

  • Display additional debug messages for the API ML
  • Enable changing log level for individual code components

Important: We highly recommend that you enable debug mode only while you troubleshoot an issue, and disable it as soon as you are done. Running API ML in debug mode degrades performance and produces large log files that consume significant disk space.

Follow these steps:

  1. Locate the shell script files in the <Zowe install directory>/api-mediation/scripts directory.

  2. Open the file for which you want to enable debug mode.

  3. Find the line that contains the spring.profiles.include parameter and change its value to debug:

    -Dspring.profiles.include=debug \
  4. Restart Zowe.

    You have enabled debug mode.

  5. (Optional) Reproduce the issue and review the debug messages. If you are unable to resolve the issue, contact CA Support.

  6. Disable debug mode. Revert the line that contains the spring.profiles.include parameter to its default (empty) value:

    -Dspring.profiles.include= \
  7. Restart Zowe.

    You have disabled debug mode.
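The edit made in steps 3 and 6 can be scripted. The following is a minimal sketch that toggles the Spring profile with sed; the file run.sh here is only a stand-in created for illustration, since the real script names under <Zowe install directory>/api-mediation/scripts vary by installation:

```shell
# Stand-in for an API ML start script (the real scripts live under
# <Zowe install directory>/api-mediation/scripts).
cat > run.sh <<'EOF'
java \
  -Dspring.profiles.include= \
  -jar gateway.jar
EOF

# Enable debug mode: set the Spring profile value to "debug".
sed -i.bak 's/-Dspring\.profiles\.include=[^ ]*/-Dspring.profiles.include=debug/' run.sh
grep 'spring.profiles.include' run.sh

# Disable debug mode again by clearing the value back to the default.
sed -i.bak 's/-Dspring\.profiles\.include=[^ ]*/-Dspring.profiles.include=/' run.sh
```

Remember to restart Zowe after each change, as described in the steps above.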

Change the Log Level of Individual Code Components

You can change the log level of a particular code component of the API ML internal service at run time.

Follow these steps:

  1. Enable API ML Debug Mode as described in Enable API ML Debug Mode. This activates the application/loggers endpoint in each API ML internal service (Gateway, Discovery Service, and Catalog).

  2. List the available loggers of a service by issuing a GET request to the service URL:

    GET scheme://hostname:port/application/loggers


    • scheme

      API ML service scheme (http or https)

    • hostname

      API ML service hostname

    • port

      TCP port on which the API ML service listens. The port is defined by the configuration parameter MFS_GW_PORT for the Gateway, MFS_DS_PORT for the Discovery Service (by default, the Gateway port + 1), and MFS_AC_PORT for the Catalog (by default, the Gateway port + 2).
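    As an illustration of the default port assignments described above (the Gateway port 10010 is an example value only; MFS_GW_PORT is installation-specific):

```shell
# Example only: MFS_GW_PORT is set by your installation.
MFS_GW_PORT=10010
# Defaults described above: the other services follow the Gateway port.
MFS_DS_PORT=$((MFS_GW_PORT + 1))   # Discovery Service
MFS_AC_PORT=$((MFS_GW_PORT + 2))   # Catalog
echo "GW=$MFS_GW_PORT DS=$MFS_DS_PORT AC=$MFS_AC_PORT"
# → GW=10010 DS=10011 AC=10012
```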

    Exception: For the Catalog, list the available loggers by issuing a GET request through the Gateway URL:

    GET [gateway-scheme]://[gateway-hostname]:[gateway-port]/api/v1/apicatalog/application/loggers

    Tip: One way to issue REST calls is to use the http command of the free HTTPie tool:

    http GET scheme://hostname:port/application/loggers
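    The loggers endpoint is provided by Spring Boot, so the response typically has the following shape. The logger names and levels below are illustrative only, not actual API ML output:

```json
{
  "levels": ["OFF", "ERROR", "WARN", "INFO", "DEBUG", "TRACE"],
  "loggers": {
    "ROOT": { "configuredLevel": "INFO", "effectiveLevel": "INFO" },
    "com.example.sample": { "configuredLevel": null, "effectiveLevel": "INFO" }
  }
}
```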
  3. Alternatively, you can extract the configuration of a specific logger by using an extended GET request:

    GET scheme://hostname:port/application/loggers/{name}


    • {name}

      is the logger name

  4. Change the log level of the given component of the API ML internal service. Use the POST request for the given service URL:

    POST scheme://hostname:port/application/loggers/{name}

    The POST request requires the new log level, provided as JSON in the request body:

    {"configuredLevel": "level"}


    • level

      is the new log level: OFF, ERROR, WARN, INFO, DEBUG, or TRACE


    Example:

    http POST https://hostname:port/application/loggers/{name} configuredLevel=WARN
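If HTTPie is not available, the same change can be made with curl. The sketch below only assembles and prints the command, since running it requires a live API ML instance; the host, port, and logger name are hypothetical placeholders:

```shell
# Hypothetical values; substitute your Gateway host, port, and logger name.
HOST="localhost"
PORT="10010"
LOGGER="com.example.mylogger"      # placeholder logger name
BODY='{"configuredLevel": "WARN"}'

# Assemble the curl command; -k skips TLS verification and is acceptable
# only with self-signed test certificates. Run the printed command against
# a live instance to apply the change.
CMD="curl -k -X POST https://${HOST}:${PORT}/application/loggers/${LOGGER} -H 'Content-Type: application/json' -d '${BODY}'"
echo "$CMD"
```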

Known Issues

API ML stops accepting connections after z/OS TCP/IP stack is recycled


When the z/OS TCP/IP stack is restarted, the internal services of API Mediation Layer (Gateway, Catalog, and Discovery Service) may stop accepting all incoming connections, enter a continuous loop, and write numerous error messages to the log.

Sample message: The following is a typical error message displayed in STDOUT:

2018-Sep-12 12:17:22.850 -- Socket accept failed EDC5122I Input/output error.
    (Java stack trace follows; truncated here. The trace originates in Tomcat's socket accept loop, tomcat-coyote-8.5.29.)


Solution: Restart the API Mediation Layer.

Tip: To prevent this issue, do not restart the TCP/IP stack while API ML is running.
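To confirm this condition, the service's STDOUT log can be scanned for the EDC5122I accept failures shown above. The sketch below uses a stand-in log file created for illustration; the real log location depends on your Zowe configuration:

```shell
# Stand-in log file; in practice, point the grep at the service's STDOUT log.
cat > stdout.log <<'EOF'
2018-Sep-12 12:17:22.850 -- Socket accept failed EDC5122I Input/output error.
2018-Sep-12 12:17:22.901 -- Socket accept failed EDC5122I Input/output error.
EOF

# Count accept failures; a rapidly growing count after a TCP/IP stack
# restart indicates the loop described above, and API ML must be restarted.
grep -c 'Socket accept failed EDC5122I' stdout.log
# → 2
```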