03 Dec 2018 | Peter Stöckli
Alphabot Security has looked at a bunch of popular Java communication libraries to check whether they verify that the hostname of the server they connect to is valid for the presented certificate.
The following Java libraries with missing hostname verification were found:
Improper Validation of Certificate with Host Mismatch (CWE-297) is described as follows:
The software communicates with a host that provides a certificate, but the software does not properly ensure that the certificate is actually associated with that host. Even if a certificate is well-formed, signed, and follows the chain of trust, it may simply be a valid certificate for a different site than the site that the software is interacting with.
If the certificate's host-specific data is not properly checked - [..] - it may be possible for a redirection or spoofing attack to allow a malicious host with a valid certificate to provide data, impersonating a trusted host. In order to ensure data integrity, the certificate must be valid and it must pertain to the site that is being accessed.
Unfortunately, this kind of vulnerability is very common in the Java world since certificate verification and hostname verification are treated as two different parts, when in practice some sort of hostname verification is necessary to prevent MITM-attacks on all sorts of different protocols conveyed via TLS. (E.g. see RFC 2818/HTTP Over TLS).
Google provides some good documentation regarding Java and hostname verification with a focus on Android apps, including following warning:
Caution: SSLSocket does not perform hostname verification. It is up to your app to do its own hostname verification, preferably by calling getDefaultHostnameVerifier() with the expected hostname. Further beware that HostnameVerifier.verify() doesn't throw an exception on error but instead returns a boolean result that you must explicitly check.
Since Java 7 there’s another way of setting up hostname verification for libraries that require HTTPS-like hostname verification: by calling setEndpointIdentificationAlgorithm with the string value "HTTPS" on the SSLParameters of the SSLSocket or SSLEngine.
Important: If you use SSLContexts in your code always write tests that ensure that hostname verification works as expected.
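A minimal sketch of that Java 7+ API (the method and class names are from the standard JDK; connecting the socket to an actual host is omitted here):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;

public class EndpointIdentificationExample {

    // Returns an unconnected SSLSocket with HTTPS-style endpoint
    // identification enabled, so the hostname is checked against the
    // server certificate during the TLS handshake (available since Java 7).
    public static SSLSocket createVerifyingSocket() throws Exception {
        SSLContext context = SSLContext.getDefault();
        SSLSocket socket = (SSLSocket) context.getSocketFactory().createSocket();
        SSLParameters params = socket.getSSLParameters();
        // "HTTPS" enables RFC 2818 style hostname verification
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
        return socket;
    }

    public static void main(String[] args) throws Exception {
        SSLSocket socket = createVerifyingSocket();
        System.out.println(socket.getSSLParameters().getEndpointIdentificationAlgorithm());
    }
}
```

With this parameter set, the handshake fails if the certificate does not match the hostname, instead of silently accepting it.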
The suboptimal Java API is often mirrored by libraries that use it, so that the user of the library has to set up hostname verification himself. In our opinion the sensible thing for a library to do is to be secure by default, whilst allowing the user to turn off security features he deems unnecessary in his specific case.
The Spring RabbitMQ Java Client (also known as Spring-AMQP) uses the official RabbitMQ Java Client to connect to RabbitMQ.
The official rabbitmq-java-client had a suboptimal API which only allowed enabling hostname verification by providing a custom SSLContext. In defense of the library, it has JavaDoc on methods like useSslProtocol() that states:
not recommended to use in production as it provides no protection against man-in-the-middle attacks.
However, the Spring RabbitMQ Java Client did not provide its own SSLContext and was as such never protected against MITM-attacks.
The rabbitmq-java-client has since implemented the method enableHostnameVerification(), which makes it easier to enable hostname verification.
The mitigation as described in the advisory:
- Upgrade to the 1.7.10.RELEASE or 2.0.6.RELEASE and set the enableHostnameValidation property to true. Override the transitive amqp-client version to at least 4.8.0 and 5.4.0, respectively
- The upcoming 2.1.0.RELEASE will set the property to true by default.
- If you are using the amqp-client library directly to create a connection factory, refer to its javadocs for the enableHostnameValidation() method.
The Apache ActiveMQ Client simply did not have hostname verification. This was fixed in Apache ActiveMQ 5.15.6, enabling hostname verification by default. So if you’re connecting to Amazon MQ or a similar service using the ActiveMQ Client you should upgrade to version 5.15.6 or later.
The Jetty WebSocket client before 9.4.12 had an SslContextFactory configured that was potentially initialized without hostname verification. Since version 9.4.12 Jetty provides an SslContextFactory with TLS hostname verification enabled.
Users of the Spring Framework that use the JettyWebSocketClient should upgrade to a framework version which includes a Jetty version of 9.4.12 or later. If you are using an older version of the Jetty WebSocket client you have to explicitly configure the SslContextFactory to get TLS hostname verification, or simply upgrade your Jetty version to 9.4.12 or later.
23 Jul 2018 | Peter Stöckli
On the 22nd of July the Apache Tomcat team released more information about three security vulnerabilities worth mentioning. They had already fixed the vulnerabilities in previous patch releases. Those three vulnerabilities are:
The different vulnerabilities affect the Tomcat 7.0.x, 8.5.x and 9.0.x versions. (Older versions of Tomcat (e.g. 6.0.x and older) are EOL (End of life). The Tomcat 8.0.x line is also EOL.) Please note that there are lots of other products and projects that are based on Tomcat (e.g. TomEE) and might also be affected.
As it reads in the security announcement:
This was initially reported as “User session are mixed up after internal exceptions” by a JetBrains employee:
It is not yet entirely clear what triggers this potentially grave vulnerability in the NIO and NIO2 connectors. According to the reporter, it was accompanied by several exceptions occurring in the same time frame.
As it reads in the security announcement:
Tomcat uses the UTF-8 decoder of the late Apache Harmony project. That decoder has an unhandled edge case (i.e. a bug) which can lead to an infinite loop while trying to decode UTF-8 encoded characters.
Lastly, the WebSocket client did not verify if the hostname in the TLS certificate and the actual hostname of the remote host matched.
If you are a user of Apache Tomcat it is recommended to subscribe to the official tomcat-announce mailing list to get information about new releases and security vulnerabilities directly from the Tomcat team.
We recommend updating your Tomcat installations each time a new Tomcat patch release is announced.
03 Oct 2017 | Peter Stöckli
The Apache Tomcat team announced today that all Tomcat versions before 9.0.1 (Beta), 8.5.23, 8.0.47 and 7.0.82 contain a potentially dangerous remote code execution (RCE) vulnerability on all operating systems if the default servlet is configured with the parameter readonly set to false, or if the WebDAV servlet is enabled with the parameter readonly set to false.
This configuration would allow any unauthenticated user to upload files (as used in WebDAV). It was discovered that the filter that prevents the uploading of JavaServer Pages (.jsp) can be circumvented. So JSPs can be uploaded, which then can be executed on the server.
Now since this feature is typically not wanted, most publicly exposed systems won’t have readonly set to false.
This security issue (CVE-2017-12617) was discovered after a similar vulnerability in Tomcat 7 on Windows (CVE-2017-12615) had been fixed. Unfortunately, it was publicly disclosed in the Tomcat bug tracker on the 20th of September.
Updating Tomcat to a version where the vulnerability is fixed is recommended in all cases.
(The setting could be enabled by accident or other vulnerable combinations could be discovered.)
Part of the original announcement:
CVE-2017-12617 Apache Tomcat Remote Code Execution via JSP Upload Severity: Important Versions Affected: Apache Tomcat 9.0.0.M1 to 9.0.0 Apache Tomcat 8.5.0 to 8.5.22 Apache Tomcat 8.0.0.RC1 to 8.0.46 Apache Tomcat 7.0.0 to 7.0.81 Description: When running with HTTP PUTs enabled (e.g. via setting the readonly initialisation parameter of the Default servlet to false) it was possible to upload a JSP file to the server via a specially crafted request. This JSP could then be requested and any code it contained would be executed by the server. Mitigation: Users of the affected versions should apply one of the following mitigations: - Upgrade to Apache Tomcat 9.0.1 or later - Upgrade to Apache Tomcat 8.5.23 or later - Upgrade to Apache Tomcat 8.0.47 or later - Upgrade to Apache Tomcat 7.0.82 or later Credit: This issue was first reported publicly followed by multiple reports to the Apache Tomcat Security Team. History: 2017-10-03 Original advisory
The publicly described exploit is as simple as sending a specially crafted HTTP PUT request with a JSP as payload to a Tomcat server.
The code is then executed when the newly uploaded JSP is accessed via an HTTP client (e.g. web browser):
The misconfiguration in the default servlet can be spotted by checking whether the web.xml of the default servlet contains an init-param like this (typically there are other init-params set):
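A shortened, hypothetical default servlet configuration with the dangerous setting would look like this (the servlet-class is Tomcat's standard DefaultServlet):

```xml
<servlet>
    <servlet-name>default</servlet-name>
    <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
    <init-param>
        <param-name>readonly</param-name>
        <!-- dangerous: "false" enables HTTP PUT (and DELETE) -->
        <param-value>false</param-value>
    </init-param>
</servlet>
```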
Please note that the misconfiguration could also occur in code or in the configuration of the WebDAV servlet (if enabled).
The documentation of the default servlet talks about the read only param like this:
Is this context "read only", so HTTP commands like PUT and DELETE are rejected? [true]
Since this sentence does not mention the dangers of this param we suggested a change to said documentation.
Updating Tomcat to a version where the vulnerability is fixed (e.g. Tomcat 8.5.23) is recommended.
The readonly init-param shouldn’t be set to false. If this param is left at the default (true) an attacker is not able to upload files.
On this occasion it’s also a good idea to make sure that you don’t have the same vulnerability in custom PUT implementations (also see: Unrestricted File Upload).
Additionally, it’s of course also possible to block PUT and DELETE requests on the frontend server (e.g. on the Web Application Firewall (WAF)).
In our eyes it is almost always wrong to set readonly to false, and hopefully most publicly accessible Tomcat servers don’t have it set to false anyway.
If you are a user of Apache Tomcat it is recommended to subscribe to the tomcat-announce mailinglist to get information about new releases and security vulnerabilities directly from the Tomcat team.
On some sites on the Internet (e.g. on Stack Overflow) you find the information that you should set readonly to false to make your custom servlet accept DELETE requests. That is simply wrong!
Updated the blog post to better point out that an upgrade to a fixed Tomcat version is (of course) recommended. Added the original announcement.
Extended Mitigation chapter, improved wording.
14 Aug 2017 | Peter Stöckli
tl;dr ViewStates in JSF are serialized Java objects. If the JSF implementation used in a web application is not configured to encrypt the ViewState, the web application may have a serious remote code execution (RCE) vulnerability. So it is important that ViewState encryption is never disabled!
After we had a look at RCEs through misconfigured JSON libraries we started analyzing the ViewStates of JSF implementations. JavaServer Faces (JSF) is a User Interface (UI) technology for building web UIs with reusable components. JSF is mostly used for enterprise applications and a JSF implementation is typically used by a web application that runs on a Java application server like JBoss EAP or WebLogic Server. There are two well-known implementations of the JSF specification:
This blog post focuses on the two JSF 2.x implementations: Oracle Mojarra (Reference Implementation) and Apache MyFaces. Older implementations (JSF 1.x) are also likely to be affected by the vulnerabilities described in this post. (JSF 2.0.x was initially released in 2009, the current version is 2.3.x).
A difference between JSF and similar web technologies is that JSF makes use of ViewStates (in addition to sessions) to store the current state of the view (e.g. what parts of the view should currently be displayed). The ViewState can be stored on the server or the client. JSF ViewStates are typically automatically embedded into HTML forms as a hidden field with the name javax.faces.ViewState. They are sent back to the server when the form is submitted.
If the JSF ViewState is configured to sit on the server, the hidden javax.faces.ViewState field contains an id that helps the server retrieve the correct state. In the case of MyFaces that id is a serialized Java object!
If the JSF ViewState is configured to sit on the client, the hidden javax.faces.ViewState field contains a serialized Java object that is at least Base64 encoded. You might have realized by now that this is a potential road to disaster! That might be one of the reasons why nowadays JSF ViewStates are encrypted and signed before being sent to the client.
Let’s assume we have a web application with a JSF based login page:
That login page has a ViewState that is neither encrypted nor signed. So when we look at its HTML source we see a hidden field containing the ViewState:
If you decode the above ViewState using Base64 you will notice that it contains a serialized Java object. This ViewState is sent back to the server via POST when the form is submitted (e.g. on a click on Login). Now before the ViewState is POSTed back to the server, the attacker replaces it with his own malicious ViewState built using a gadget that’s already on the server’s classpath (e.g. InvokerTransformer from commons-collections-3.2.1.jar) or even a gadget that is not yet known to the public. With said malicious gadget placed in the ViewState the attacker specifies which commands he wants to run on the server. The flexibility of what an attacker can do is limited by the powers of the available gadgets on the classpath of the server. In the case of the InvokerTransformer the attacker can specify which command line commands should be executed on the server. The attacker in our example chose to start a calculator on the UI of our Linux based server.
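As a quick sanity check (our own illustration, not part of JSF): a Base64-decoded value that starts with the Java serialization magic bytes 0xAC 0xED is a serialized Java object. The sample ViewState value below is hypothetical:

```java
import java.util.Base64;

public class ViewStateCheck {

    // Serialized Java objects start with the STREAM_MAGIC bytes 0xAC 0xED.
    public static boolean looksLikeJavaSerialization(String base64ViewState) {
        byte[] data = Base64.getDecoder().decode(base64ViewState);
        return data.length >= 2
                && (data[0] & 0xFF) == 0xAC
                && (data[1] & 0xFF) == 0xED;
    }

    public static void main(String[] args) {
        // "rO0" is the Base64 encoding of 0xAC 0xED 0x00 — a telltale prefix
        // of a serialized Java object (hypothetical sample value).
        System.out.println(looksLikeJavaSerialization("rO0ABXQABHRlc3Q="));
    }
}
```

Seeing a ViewState that starts with "rO0" in a page's HTML source is therefore a strong hint that it is an unencrypted serialized object.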
After the attacker has sent his modified form back to the server the JSF implementation tries to deserialize the provided ViewState. Now even before the deserialization of the ViewState has ended the command is executed and the calculator is started on the server:
Everything happened before the JSF implementation could have a look at the ViewState and decide that it was no good. When the ViewState is found to be invalid, typically an error like “View expired” is sent back to the client. But by then it’s already too late: the attacker has had access to the server and has run his commands. (Most real-world attackers don’t start a calculator; they typically deploy a remote shell, which they then use to access the server.)
=> All in all this example demonstrates a very dangerous unauthenticated remote code execution (RCE) vulnerability.
(Almost the same attack scenario against JSF as depicted above was already outlined and demonstrated in the 2015 presentation (pages 65 to 67): Marshalling Pickles held by Frohoff and Lawrence.)
Now, what are the ingredients for a disaster?
Let’s have a look at those points in relation to the two JSF implementations.
As said before, Oracle Mojarra is the JSF Reference Implementation (RI) but might not be known under that name. It might be known as Sun JSF RI, recognized by the Java package name com.sun.faces or by its ambiguous jar name.
So here’s the thing: Mojarra did not encrypt and sign the client-side ViewState by default in most versions of 2.0.x and 2.1.x. It is important to note that a server-side ViewState is the default in both JSF implementations, but a developer could easily switch the configuration to use a client-side ViewState by setting the javax.faces.STATE_SAVING_METHOD param to client. The param name in no way gives away that changing it to client introduces grave remote code execution vulnerabilities (a client-side ViewState might e.g. be used in clustered web applications).
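The switch is a single context-param in web.xml (a hypothetical fragment for illustration):

```xml
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <!-- "server" is the safe default; "client" enabled the RCE scenario
         described above on Mojarra versions without ViewState encryption -->
    <param-value>client</param-value>
</context-param>
```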
Whilst client-side ViewState encryption is the default in Mojarra 2.2 and later versions it was not for the 2.0.x and 2.1.x branches. However, in May 2016 the Mojarra developers started backporting default client-side ViewState encryption to 2.0.x and 2.1.x when they realized that unencrypted ViewStates lead to RCE vulnerabilities.
When we analyzed the Mojarra libraries we noticed that Red Hat also releases Mojarra versions for the 2.1.x and 2.0.x branches, the latest being 2.1.29-jbossorg-1 and 2.0.4-b09-jbossorg-4. Since both releases were without default ViewState encryption we contacted Red Hat and they promptly created Bug 1479661 - JSF client side view state saving deserializes data in their bugtracker with following mitigation advice for the 2.1.x branch:
A vulnerable web application needs to have set javax.faces.STATE_SAVING_METHOD to 'client' to enable client-side view state saving. The default value on Enterprise Application Platform (EAP) 6.4.x is 'server'.
If javax.faces.STATE_SAVING_METHOD is set to 'client' a mitigation for this issue is to encrypt the view by setting com.sun.faces.ClientStateSavingPassword in the application web.xml:
Unfortunately, in some even older versions that mitigation approach does not work: according to this great StackOverflow answer, the JSF implementation documentation incorrectly stated that the param com.sun.faces.ClientStateSavingPassword is used to change the Client State Saving Password, while the parameter up until 2.1.18 was accidentally called ClientStateSavingPassword. So providing a Client State Saving Password as documented didn’t have an effect! In Mojarra 2.1.19 and later versions the parameter name was changed to the documented com.sun.faces.ClientStateSavingPassword.
By default, Mojarra nowadays uses AES as the encryption algorithm and HMAC-SHA256 to authenticate the ViewState.
The default javax.faces.STATE_SAVING_METHOD setting of Mojarra is server. A developer needs to manually change it to client for Mojarra to become vulnerable to the attack scenario described above. If a serialized ViewState is sent to the server but Mojarra uses server-side ViewState saving, it will not try to deserialize it (however, a StringIndexOutOfBoundsException may occur).
When using Mojarra with a server-side ViewState nothing has to be done.
When using Mojarra < 2.2 and a client-side ViewState there are the following possible mitigations:
For later Mojarra versions:
Apache MyFaces is the other big and widely used JSF implementation.
MyFaces does encrypt the ViewState by default, as stated in their Security configuration Wiki page:
Encryption is enabled by default. Note that encription must be used in production environments and disable it could only be valid on testing/development environments.
However, it is possible to disable ViewState encryption by setting the parameter org.apache.myfaces.USE_ENCRYPTION to false. (It would also be possible to use encryption but manually set an easily guessable password.) By default the ViewState encryption secret changes with every server restart.
By default, MyFaces uses DES as the encryption algorithm and HMAC-SHA1 to authenticate the ViewState. It is possible and recommended to configure more recent algorithms like AES and HMAC-SHA256.
The default javax.faces.STATE_SAVING_METHOD setting of MyFaces is server as well. But: MyFaces always deserializes the ViewState regardless of that setting. So it is of great importance not to disable encryption when using MyFaces!
(We created an issue in the MyFaces bug tracker: MYFACES-4133 Don’t deserialize the ViewState-ID if the state saving method is server, maybe this time the wish for more secure defaults will catch on.)
When using MyFaces make sure that encryption of the ViewState is not disabled (via org.apache.myfaces.USE_ENCRYPTION), regardless of whether the ViewState is stored on the client or the server.
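For illustration, this is the context-param that must never be set to false in a MyFaces web.xml (hypothetical fragment):

```xml
<context-param>
    <param-name>org.apache.myfaces.USE_ENCRYPTION</param-name>
    <!-- "true" is the default; setting this to "false" exposes the
         deserialization-based RCE described above -->
    <param-value>true</param-value>
</context-param>
```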
Most facts about JSF ViewStates and their dangers presented in this blog post are not exactly new but it seems they were never presented in such a condensed way. It showed once more that seemingly harmless configuration changes can lead to serious vulnerabilities.
=> One of the problems seems to be that there is not enough knowledge transfer between security researchers and developers who actually use and configure libraries that might be dangerous when configured in certain ways.
13 Jun 2017 | Peter Stöckli
tl;dr No, of course you don’t want to create a vulnerable JSON API.
So when using Json.NET: don’t use a TypeNameHandling setting other than the default, TypeNameHandling.None.
In May 2017 Moritz Bechler published his MarshalSec paper, where he gives an in-depth look at remote code execution (RCE) through various Java serialization/marshaller libraries like Jackson and XStream. In the conclusion of the detailed paper, he mentions that this kind of exploitation is not limited to Java but might also be possible in the .NET world through the Json.NET library. Newtonsoft’s Json.NET is one of the most popular .NET libraries and allows deserializing JSON into .NET classes (C#, VB.NET).
So we had a look at Newtonsoft.Json and indeed found a way to create a web application that allows remote code execution via a JSON based REST API. For the rest of this post we will show you how to create such a simple vulnerable application and explain how the exploitation works. It is important to note that this kind of vulnerability in web applications is most of the time not a vulnerability in the serializer library but a configuration mistake. The idea is of course to raise awareness with developers to prevent such flaws in real .NET web applications.
The following hypothetical ASP.NET Core sample application was tested with .NET Core 1.1. For other .NET framework versions slightly different JSONs might be necessary.
The key to making our application vulnerable to “Deserialization of untrusted data” is to enable type name handling in the SerializerSettings of Json.NET. This tells Json.NET to write type information in the field “$type” of the resulting JSON and to look at that field when deserializing.
In our sample application we set this SerializerSettings globally in the ConfigureServices method in Startup.cs:
All TypeNameHandling settings other than the default are vulnerable to this attack. In fact, the only one that is not vulnerable is the default: TypeNameHandling.None.
The official Json.NET TypeNameHandling documentation explicitly warns about this:
TypeNameHandling should be used with caution when your application deserializes JSON from an external source. Incoming types should be validated with a custom SerializationBinder when deserializing with a value other than None.
But as the MarshalSec paper points out: not all developers read the documentation of the libraries they’re using.
To offer a remote attack possibility in our web application we created a small REST API that allows POSTing a JSON object.
As you may have noticed, we accept a body value of the type Info, which is our own small dummy class:
To “use” our newly created vulnerability we simply POST a type-enhanced JSON to our web service:
Et voilà: we executed code on the server!
Wait… what? But how?
When sending a custom JSON to a REST service that is handled by a deserializer with support for custom type name handling, in combination with the dynamic keyword, the attacker can specify the type he’d like to have deserialized on the server.
So let’s have a look at the JSON we sent:
The $type field specifies the class FileInfo from the namespace System.IO in the assembly System.IO.FileSystem.
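The original payload is not reproduced here; based on the description it presumably looked roughly like this (a reconstruction, field values hypothetical):

```json
{
    "$type": "System.IO.FileInfo, System.IO.FileSystem",
    "fileName": "rce-test.txt",
    "IsReadOnly": true
}
```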
The deserializer will instantiate a FileInfo object by calling the public constructor public FileInfo(String fileName) with the given fileName “rce-test.txt” (a sample file we created at the root of our insecure web app).
Json.NET prefers parameterless default constructors over constructors with parameters, but since the default constructor of FileInfo is private it uses the one with one parameter.
Afterwards it will set “IsReadOnly” to true. However, this does not simply set the “IsReadOnly” flag via reflection to true. What happens instead is that the deserializer calls the setter for IsReadOnly and the code of the setter is executed.
What happens when you call the IsReadOnly setter on a FileInfo instance is that the file is actually set to read-only.
We see that indeed the read-only flag has been set on the rce-test.txt file on the server:
A small side effect of this vulnerable service implementation is that we also can check if a file exists on the server. If the file sent in the “fileName” field does not exist an exception is thrown when the setter for IsReadOnly is called and the server returns NotFound(404) to the caller.
To perform even more sinister work an attacker could search the .NET framework codebase or third party libraries for classes that execute code in their constructors and/or setters. The FileInfo class here is just used as a very simple example.
When providing Json.NET based REST services always leave TypeNameHandling at the default: TypeNameHandling.None. When other TypeNameHandling settings are used, an attacker might be able to provide a type he wants the serializer to deserialize, and as a result unwanted code could be executed on the server.
The described behavior is of course not unique to Json.NET but is also implemented by other serialization libraries that support type information.
They also presented new gadgets, which allow more sinister attacks than the one published in this blog post (the gadgets might not work with all JSON/.NET framework combinations):
System.Configuration.Install.AssemblyInstaller: "Execute payload on local assembly load"
System.Activities.Presentation.WorkflowDesigner: "Arbitrary XAML load"
System.Windows.ResourceDictionary: "Arbitrary XAML load"
System.Windows.Data.ObjectDataProvider: "Arbitrary Method Invocation"
In addition to their findings they had a look at .NET open source projects which made use of any of those different JSON libraries with type support and found several vulnerabilities:
24 Feb 2017 | Peter Stöckli
On the 23rd of February Tavis Ormandy of Google’s Project Zero disclosed the following security vulnerability to the public: Cloudflare Reverse Proxies are Dumping Uninitialized Memory. The vulnerability affects many Cloudflare customers and especially their users. A vulnerable software component in Cloudflare’s reverse proxies led to the disclosure of Personally Identifiable Information (PII) of users around the world. Since Cloudflare reverse proxies are shared between customers, user information could emerge in a totally different place on the Internet.
The report describes how the security researchers at Google experienced the “cloudbleed” situation:
We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.
The report contains redacted user information from the ride-sharing unicorn Uber, health tracking company FitBit and dating site OkCupid.
Let’s have a quick look at how Cloudflare works. Typically Cloudflare’s customers use their services for DDoS (Distributed Denial of Service) protection. Often the customers use DNS services provided by Cloudflare and/or their traffic is redirected via Cloudflare’s reverse proxies before being sent to the customer’s web server. From a user’s point of view: the user’s traffic to the reverse proxy is encrypted; there it’s decrypted and analyzed by Cloudflare’s algorithms.
Cloudflare has published a detailed report, where Cloudflare’s talented security guys describe the technical part of the vulnerability: Incident report on memory leak caused by Cloudflare parser bug.
They write that the earliest leaking could have started on the 22nd of September 2016.
They also write:
The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines. We also undertook other search expeditions looking for potentially leaked information on sites like Pastebin and did not find anything.
However, users on Twitter reported that they could still find cached web pages using Google or Bing.
One important point is that it was not necessarily a Cloudflare customer’s own site that was leaking information about its users; a totally different site of another Cloudflare customer could have been leaking that user information.
Another important point is that the listed search engines are not the only ones collecting and storing information from websites in the Internet. Think of caches, web crawlers, archive sites, solutions that store the content of websites for legal reasons, the list goes on…
Even our newly developed web application security scanner SecBot, which continuously scans web applications for vulnerabilities, stores the HTTP responses of its requests. Since we’re still in the development phase, SecBot hasn’t yet tested a site hosted behind a Cloudflare reverse proxy. Had that been the case, SecBot’s database could contain sensitive data of Cloudflare customers. And so could many other crawlers in the world.
If you want to act proactively you can change your passwords on sites known to be using Cloudflare (however, not all sites using Cloudflare services are affected). Many websites will probably request you to change your password and revoke OAuth tokens in the coming days. As said before, the infosec people working at Cloudflare are competent and will hopefully find a solution that prevents such a huge issue from ever happening again.
26 May 2016 | Peter Stöckli
The Swiss governmental computer emergency response team (GovCERT.ch) has published a detailed technical report about the Advanced Persistent Threat (APT) that targeted RUAG. RUAG, best known for RUAG Defence, is originally a spin-off of the Swiss army and is fully owned by the Swiss state. Remarkable and applaudable is the fact that it was decided to share this kind of information. The motivation of the GovCERT is explained in the conclusion:
"[..] One of the most effective countermeasures from a victim’s perspective is the sharing of information about such attacks with other organizations, also crossing national borders. This is why we decided to write a public report about this incident, and this is why we strongly believe to share as much information as possible. If this done by any affected party, the price for the attacker raises, as he risks to be detected in every network he attacked in different countries. [..]"
The attack, which lasted from an unknown date (assumed to be in 2014) to the 3rd of May 2016, is introduced like this:
"The cyber attack is related to a long running campaign of the threat actor around Epic/Turla/Tavdig. The actor has not only infiltrated many governmental organizations in Europe, but also commercial companies in the private sector in the past decade. RUAG has been affected by this threat since at least September 2014. The actor group used malware that does not encompass any root kit technologies (even though the attackers have rootkits within their malware arsenal). An interesting part is the lateral movement, which has been done with a lot of patience. [..]"
The report goes into technical detail and reveals interesting facts about the inner workings and communication channels of the observed malware. The researchers who disassembled the binaries analyzed the encryption algorithms and communication methods used.
For example, according to page 20 of the report, the malware asymmetrically encrypted the stolen data, encoded it with Base64 and put it into a server response like this:
This seems like a fairly uncharacteristic move for a host that does not normally act as a web server, and should be detectable by an application firewall.
The report makes some generic recommendations that should help companies to prevent such attacks or at least reduce their impact and improve the forensic readiness in case something happens. Some of those countermeasure recommendations on the system level are:
- Consider using Applocker, a technique from Microsoft, which allows you to decide, based on GPOs (Group Policy Objects), which binaries are allowed to be executed, and under which paths. [..]
- Reduce the privileges a user has when surfing the web or doing normal office tasks. High privileges may only be used when doing system administration tasks.
- This actor, as well as many other actor groups, relies on the usage of “normal” tools for their lateral movement. The usage of such tools can be monitored. E.g. the start of a tool such as psexec.exe or dsquery.exe from within a normal user context should raise an alarm.
- Keep your systems up-to-date and reduce their attack surface as much as possible (e.g.: Do you really need to have Flash deployed on every system?)
- Use write blockers and write protection software for your USB/Firewire devices, or even disable them for all client devices
- Block the execution of macros, or require signed macros
Other areas of recommendations concern the Active Directory, the network, logging, system management and organizational aspects. Most of the recommendations sound straightforward and should already be in place in a similar manner in secure environments of bigger companies. Interestingly, the report does not reference ISO 27001, ISO 27002 or any other standards in the information security field, although its generic recommendations would align very well with them. Most likely the main focus of the authors was to give practical tips free of management lingo, reaching a broad, heterogeneous audience.
In general, the work that went into creating and publishing this report is appreciated and will hopefully have an impact.
03 May 2016 | Peter Stöckli
We slightly redesigned our website and started the Alphabot Security Blog where we will write about security vulnerabilities and development tips for web applications.