

Malware of the future checks if it's inside the Matrix


“What,” asked the speaker, “if Notepad behaved just as you would expect it to, but only for the first hour or so that you used it? What if it began to do different things after that?”

According to Giovanni Vigna, a professor at the University of California, Santa Barbara, and the head of the Center for CyberSecurity and Seclab there, such possum-like behaviour and long-term thinking represent the future of the malware arms race.

Speaking at IP Expo today, Prof. Vigna outlined scenarios in which an increasingly sophisticated and opaque breed of malicious executable will evolve to ‘mimic’ the behaviour patterns of benign software, in an attempt to avoid wasting its payload on a sandbox or virtualised environment.

Three thousand previously unidentified malware entities flood the network every day. Many are old ‘friends’ repackaged to generate hashes unfamiliar to the databases of BitDefender, Symantec and other anti-malware companies, and this guarantees them at least an hour in the wild, if not a whole ‘zero’ day.

But others are genuinely evolutionary. Instead of sprinting for a buffer overflow, some malware now demonstrates incredibly circumspect behaviour upon launch. The first thing the entity wants to know is if it is running in front of a real user and in a real system, and to this end it has developed an ever-growing map of tell-tale signs that it might not be in Kansas after all.

In general terms, evasive malware is looking for documented differences between a virtualised and bare-metal environment, and it is likely to find them, depending on the host VM, in a check of the system’s CPU features, in the BIOS, and in artefacts within the OS - particularly, in Windows systems, in the DLL listings and in registry sub-values.
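To make the idea concrete, here is a minimal sketch in C (assuming Windows and an MSVC-style toolchain) of two of the more widely documented probes: the CPUID ‘hypervisor present’ bit, and a BIOS version string in the registry that some hypervisors populate with identifiers such as ‘VBOX’. The particular strings and registry value chosen here are illustrative assumptions, not details taken from the talk.

#include <windows.h>
#include <intrin.h>
#include <stdio.h>
#include <string.h>
#pragma comment(lib, "advapi32.lib")

/* CPUID leaf 1: ECX bit 31 is the 'hypervisor present' flag. */
static int hypervisor_bit_set(void)
{
    int regs[4] = { 0 };
    __cpuid(regs, 1);
    return (regs[2] >> 31) & 1;
}

/* BIOS version string in the registry: under some hypervisors it contains
   identifiers such as "VBOX". The value is REG_MULTI_SZ; checking only the
   first string is a simplification for this sketch. */
static int bios_string_suspicious(void)
{
    HKEY key;
    char buf[256] = { 0 };
    DWORD size = sizeof(buf) - 1;
    int suspicious = 0;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "HARDWARE\\DESCRIPTION\\System",
                      0, KEY_READ, &key) != ERROR_SUCCESS)
        return 0;

    if (RegQueryValueExA(key, "SystemBiosVersion", NULL, NULL,
                         (LPBYTE)buf, &size) == ERROR_SUCCESS) {
        if (strstr(buf, "VBOX") != NULL || strstr(buf, "VMWARE") != NULL)
            suspicious = 1;
    }
    RegCloseKey(key);
    return suspicious;
}

int main(void)
{
    printf("hypervisor bit set:  %d\n", hypervisor_bit_set());
    printf("VM-like BIOS string: %d\n", bios_string_suspicious());
    return 0;
}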

On a more pragmatic level, it will look for hardware hooks indicating the connection of a keyboard and mouse, and in particular it will seek to identify mouse movement as a sign that an actual end-user may be sitting in front of the malware on a ‘real’ computer. It will also check the colour of a background pixel, Mutex names, the names of hardware connected to the system and the Windows Product ID.
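The mouse-movement test is the simplest to picture. The sketch below (again assuming Windows) samples the cursor position twice, a few seconds apart, and treats a cursor that never moves as a hint that no live user is present; the five-second interval is an arbitrary illustrative choice.

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "user32.lib")

int main(void)
{
    POINT before, after;

    /* Sample the cursor, wait, sample again: an unmoved cursor over a
       few seconds is one of the 'no real user here' signals. */
    GetCursorPos(&before);
    Sleep(5000);
    GetCursorPos(&after);

    if (before.x == after.x && before.y == after.y)
        printf("cursor never moved: possibly an automated environment\n");
    else
        printf("cursor moved: a user appears to be present\n");
    return 0;
}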

The stakes are high – if the malware has got this far, with its hash unlisted in the popular security databases, it has everything to lose by disclosing its target behaviour in a virtual environment.

In 64-bit Windows systems, some more advanced malware uses Windows’ own trick of escaping 32-bit address space to make 64-bit system calls from 32-bit code – bypassing systems that are monitoring the 32-bit addresses of system calls.

Every system call is a gamble for the malware. Though the compiled binary is far harder to analyse, even when running, than its source code would be, it will still need a good excuse to begin looking up the list of its host system’s running processes – in reality seeking out the presence of known analysis tools that might be watching it. Prof. Vigna’s own Anubis malware analysis software is on the malware-writer’s ‘hit list’.
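That process lookup can be sketched with the Windows Toolhelp API, as below. The tool names in the watch list are illustrative stand-ins rather than names taken from the talk, and the code assumes a non-Unicode build so the process image name is a plain char string.

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Illustrative watch list of analysis tools; real samples carry far
       longer lists, including sandbox and VM helper processes. */
    const char *watched[] = { "wireshark.exe", "procmon.exe", "ollydbg.exe" };
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    PROCESSENTRY32 pe;
    size_t i;

    if (snap == INVALID_HANDLE_VALUE)
        return 1;

    pe.dwSize = sizeof(pe);
    if (Process32First(snap, &pe)) {
        do {
            for (i = 0; i < sizeof(watched) / sizeof(watched[0]); i++) {
                if (_stricmp(pe.szExeFile, watched[i]) == 0)
                    printf("analysis tool running: %s\n", pe.szExeFile);
            }
        } while (Process32Next(snap, &pe));
    }
    CloseHandle(snap);
    return 0;
}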

Vigna, CTO of security solutions company Lastline, has also found malware source code that specifically seeks out the user ‘Andy’ in a new environment, as this reflects the name of one of his team in earlier VM battles with malware authors.

Some of this paranoia is contextual – looking up system processes would likely be a red flag in a freeware text editor but merely a routine and expected environment check for a defragger, which would be looking for system elements that may prevent routine system housecleaning.

The challenge facing security researchers is time – there is nothing new in malware waiting out a specific period or awaiting a certain set of environmental conditions before acting. But this intelligent probing of the host environment is a phenomenon of recent years. If the malware in question cannot be convinced that it is in a worthwhile attack space, it may never act at all, and may therefore prove difficult to study, categorise or protect against.

The task facing Professor Vigna and his colleagues, given the onslaught of new variants, is to develop workflows that automate the identification of ‘evasive’ malware within realistic time constraints. To this end Vigna proposes using inactivity or cycle-idling as a critical indicator, to stop the suspect software timing out the analysis before it reveals its mission.

Malware ‘stalling’ techniques are not easy for analysis tools to evaluate, but some of the methods are known: obscure math instructions are one, and loops which keep the process in memory but achieve nothing are another.
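As a rough illustration of the second kind, the sketch below burns time with a long run of pointless floating-point work instead of calling any sleep or timer API that a sandbox might hook or fast-forward; the iteration count is an arbitrary assumption.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* A stalling loop: keeps the process alive and busy while achieving
       nothing. The volatile accumulator stops the compiler from
       optimising the loop away. */
    volatile double sink = 0.0;
    unsigned long i;

    for (i = 0; i < 200000000UL; i++)
        sink += sqrt((double)(i % 1000) + 1.0);

    /* Only now would the real payload run. */
    printf("done stalling (%f)\n", sink);
    return 0;
}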

As it is, static evaluation of an unlaunched application is proving increasingly frustrating. In a recent study of Android apps, Vigna came across several cases in which an app contained no malicious code at the point of download, but would later update itself across the network.

For Prof. Vigna, the real challenge may still lie ahead, at the point when malware develops a new level of disingenuousness. The professor’s own experiments with mimicry simulation have convinced him that genuine system calls and functional integrity will be incorporated into future malware configurations, making hostile intent even harder to identify. It hasn’t happened yet, so far as he can tell, but it seems to make sense.

Taking the trend to its conclusion, one can imagine the popularisation and domestication of a new application that has hidden intent, but may not reveal it, or exhibit any suspicious behaviour, for a very long time.

Further reading:
Automatically Detecting Evasive Malware - Giovanni Vigna
BareCloud: Bare-metal Analysis-based Evasive Malware Detection
A fistful of red-pills: How to automatically generate procedures to detect CPU emulators; Usenix Workshop on Offensive Technologies (WOOT), 2009 [PDF]
