Ken Shirriff’s blog: Mining Bitcoin with pencil and paper: 0.67 hashes per day: I decided to see how practical it would be to mine Bitcoin with pencil and paper. It turns out that the SHA-256 algorithm used for mining is pretty simple and can in fact be done by hand. Not surprisingly, the process is extremely slow compared to hardware mining and is entirely impractical. But performing the algorithm manually is a good way to understand exactly how it works.
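The pencil-and-paper exercise is feasible because each SHA-256 round boils down to a handful of 32-bit bitwise operations. A minimal sketch of those round primitives (straight from the SHA-256 specification; function names are the standard ones):

```python
# The 32-bit primitives at the heart of each SHA-256 round -- these are
# exactly the operations one carries out by hand: AND, XOR, NOT, rotate.
MASK32 = 0xFFFFFFFF

def rotr(x, n):
    # rotate a 32-bit word right by n bits
    return ((x >> n) | (x << (32 - n))) & MASK32

def ch(x, y, z):
    # "choose": for each bit, take y's bit where x is 1, else z's bit
    return (x & y) ^ ((~x & z) & MASK32)

def maj(x, y, z):
    # "majority": each output bit is the majority vote of x, y, z
    return (x & y) ^ (x & z) ^ (y & z)

def big_sigma0(x):
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)

def big_sigma1(x):
    return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)

print(hex(rotr(0x00000001, 1)))  # 0x80000000
```

Doing 64 of these rounds per 512-bit block by hand is what makes the 0.67 hashes/day figure plausible.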
Malicious SHA-1: systems using “custom” versions of SHA-1 may include backdoors exploitable by the designers. Such custom versions of cryptographic standards are typically found in proprietary systems as a way to personalize the cryptography for a given customer, while retaining the security guarantees of the original algorithm.
SHA-1 is a NIST standard designed by NSA in 1995 and used everywhere: in TLS, SSH, IPsec, etc. as part of encryption, signature, message authentication, or key derivation schemes.
SHA-1 produces 160-bit hash values. Therefore a generic attack requires approximately 2^80 evaluations of SHA-1 to find a collision, as per the birthday paradox. Such a “birthday attack” works on any reasonable hash function regardless of its strength. Cryptographers thus say that a hash function is “broken” if one finds an attack substantially faster than the birthday attack.
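The 2^80 figure follows directly from the birthday bound: an n-bit hash resists generic collision search only up to roughly 2^(n/2) evaluations. A quick sanity check (the helper name is mine, not from the article):

```python
import math

def birthday_bound(n_bits):
    # generic collision cost for an n-bit hash: about 2^(n/2) evaluations
    return 2 ** (n_bits // 2)

# SHA-1: 160-bit output -> ~2^80 generic attack. The best public attacks
# discussed here run in roughly 2^60 SHA-1 evaluations, i.e. about
# 2^20 (~a million) times faster than the generic bound.
generic = birthday_bound(160)
speedup = generic // 2 ** 60
print(math.log2(generic), speedup)  # 80.0 1048576
```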
According to this definition, SHA-1 is broken, since public research described collision attacks more than a thousand times faster than the birthday attack. However,
- the actual complexity of collision attacks on SHA-1 is unclear, but seems to be greater than 2^60
- an actual collision for the original SHA-1 has yet to be published (found?)
The known collision attacks are differential attacks. These introduce differences in the first message block (SHA-1 processes a message by iteratively compressing 512-bit blocks) and control how the injected differences propagate through SHA-1’s internal state, so that a second message block can “correct” the disturbances and leave the internal state free of any difference, yielding a collision.
To construct malicious SHA-1 versions, we had to find
- a differential characteristic of high enough probability (that is, a pattern of differences propagation that leads to a collision)
- a method to efficiently find messages and constants following this characteristic
To find a differential characteristic, we built on previous research, seeking a characteristic by linearization that minimizes the cost of constructing a malicious version of SHA-1.
Volatility 2.3 and FireEye’s diskless, memory-resident malware: If you needed any more evidence as to why your DFIR practice should evolve toward a heavy focus on memory analysis, let me offer you some real impetus.
FireEye’s Operation Ephemeral Hydra: IE Zero-Day Linked to DeputyDog Uses Diskless Method, posted 10 NOV 2013, describes an attack that “loaded the payload directly into memory without first writing to disk.” As such, this “will further complicate network defenders’ ability to triage compromised systems, using traditional forensics methods.” Again, what is described is a malware sample (payload) that “does not write itself to disk, leaving little to no artifacts that can be used to identify infected endpoints.” This FireEye analysis is obviously getting its share of attention, but folks are likely wondering “how the hell are we supposed to detect that on compromised systems?”
Question: Why does Volatility rule?
Answer: Because we don’t need no stinking file system artifacts.
In preparation for a Memory Analysis with Volatility presentation I gave at SecureWorld Expo Seattle last evening, I had grabbed the malware sample described in great length by FireEye from VirusShare (MD5 104130d666ab3f640255140007f0b12d), executed it on a Windows 7 32-bit virtual machine, used DumpIt to grab memory, and imported the memory image to my SIFT 2.14 VM running Volatility 2.3 (had to upgrade as 2.2 is native to SIFT 2.14).
I had intended to simply use a very contemporary issue (3 days old) to highlight some of the features new to the just released stable Volatility 2.3, but what resulted was the realization that “hey, this is basically one of the only ways to analyze this sort of malware.”
So here’s the breakdown.
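For reference, the triage workflow described above maps to Volatility invocations along these lines (the image filename and profile are placeholders; substitute your own capture and the profile `imageinfo` suggests):

```shell
# Identify a suitable profile for the captured memory image
vol.py -f memdump.raw imageinfo

# List running processes to spot the suspicious one
vol.py -f memdump.raw --profile=Win7SP1x86 pslist

# Hunt for injected code regions and dump them for further analysis
vol.py -f memdump.raw --profile=Win7SP1x86 malfind -D dump/
```

Since the payload never touches disk, `malfind`'s scan of process memory for injected executable regions is precisely the kind of detection a file-system-centric workflow cannot offer.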
The Evolution of Protected Processes Part 1: Pass-the-Hash Mitigations in Windows 8.1
It was more than six years ago that I first posted on the concept of protected processes, making my opinion of this poorly thought-out DRM scheme clear in the title alone: “Why Protected Processes Are A Bad Idea”. It appears that Microsoft took a long, hard look at the mechanism (granted, an impenetrable user-mode process can have interesting security benefits — if we can get DRM out of the picture), creating a new class of process yet again: the Protected Process Light, sometimes abbreviated PPL in the kernel.
Unlike its “heavy” brother, the protected process light actually serves as a type of security boundary, bringing in three useful mitigations and security enhancements to the Windows platform. Over the next three or four blog posts, we’ll see how each of these enhancements is implemented, starting this week with Pass-the-Hash (PtH) Mitigation.
We’ll talk about LSASS’ role in the Windows security model, followed by the technical details behind the new PPL model. And since it’s hard to cover any new security advancement without delving in at least a few other inter-related internals areas, we’ll also talk a little bit about Secure Boot and protected variables. Perhaps most importantly, we’ll also see how to actually enable the PtH mitigation, as it is currently disabled by default on non-RT Windows versions.
Targeted Internet Traffic Misdirection: For years, we’ve observed that there was potential for someone to weaponize the classic Pakistan-and-Youtube style route hijack. Why settle for simple denial of service, when you can instead steal a victim’s traffic, take a few milliseconds to inspect or modify it, and then pass it along to the intended recipient?
This year, that potential has become reality. We have actually observed live Man-In-the-Middle (MITM) hijacks on more than 60 days so far this year. About 1,500 individual IP blocks have been hijacked, in events lasting from minutes to days, by attackers working from various countries.
Simple BGP alarming is not sufficient to distinguish MITM from a generic route hijacking or fat-finger routing mistake; you have to follow up with active path measurements while the attack is underway in order to verify that traffic is being simultaneously diverted and then redelivered to the victim. We’ve done that here.
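As a toy illustration of why simple alarming is ambiguous, here is a hedged sketch (function name, ASNs, and data are mine, not from the report) that flags observed AS paths traversing networks outside an expected transit set. A hit could be a hijack or a fat-finger leak; telling a MITM apart still requires the active path measurements described in the report.

```python
def unexpected_transit(as_path, origin_asn, expected_asns):
    """Return the ASNs in an observed BGP AS path that are neither the
    origin nor in the expected transit set -- a crude hijack indicator."""
    return {asn for asn in as_path
            if asn != origin_asn and asn not in expected_asns}

# Hypothetical prefix originated by AS64500, normally reached via
# transit ASes 64501 and 64502 (all ASNs from the private range):
expected = {64501, 64502}
clean = [64501, 64502, 64500]
diverted = [64501, 64999, 64502, 64500]   # extra hop inserted mid-path

print(unexpected_transit(clean, 64500, expected))     # set()
print(unexpected_transit(diverted, 64500, expected))  # {64999}
```

Note that a MITM attacker re-delivers traffic to the victim, so data-plane measurements (traceroutes through the diverted path) are needed to confirm the detour while it is underway.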
Windows 8 File History Analysis: File History is a new backup service introduced in Windows 8. By default the feature is off; to turn it on, the user has to select a backup location, either a network drive or external storage media, so the same disk cannot be used. File History backs up files in the Libraries, Desktop, Contacts and Favorites folders, and there is an option to exclude any folders that users don’t want to back up. Note that File History cannot back up folders synced with cloud storage services. According to Microsoft, “File History doesn’t back up files on your PC that you have synced with SkyDrive, even if they’re in folders that File History backs up.” Once turned on, File History automatically backs up the folders every hour by default, though this interval can easily be changed in the advanced settings, and the user can run the service manually at any time. File History appears as fhsvc in the Task Manager, and some associated DLLs are fhcfg.dll, fhcpl.dll and fhsvcctl.dll.
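A forensically useful detail is File History’s naming convention: each backed-up copy gets a UTC timestamp appended to the file name before the extension, e.g. `report (2013_11_10 14_30_00 UTC).docx`. A hedged sketch of recovering the original name and backup time (the exact regex is my assumption based on that format, not from the article):

```python
import re
from datetime import datetime

# Assumed File History pattern: "name (YYYY_MM_DD HH_MM_SS UTC).ext"
FH_PATTERN = re.compile(
    r"^(?P<name>.*) \((?P<ts>\d{4}_\d{2}_\d{2} \d{2}_\d{2}_\d{2}) UTC\)"
    r"(?P<ext>\.[^.]+)?$"
)

def parse_backup_name(filename):
    """Return (original_name, backup_datetime_utc) or None if the name
    does not look like a File History copy."""
    m = FH_PATTERN.match(filename)
    if not m:
        return None
    ts = datetime.strptime(m.group("ts"), "%Y_%m_%d %H_%M_%S")
    return m.group("name") + (m.group("ext") or ""), ts

print(parse_backup_name("report (2013_11_10 14_30_00 UTC).docx"))
```

Sorting the parsed timestamps for one original name reconstructs the version history of that file on the backup target.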
Windows Systems and Artifacts in Digital Forensics: Part III: Prefetch Files: In this article, I’m going to focus on prefetch files: their characteristics, structure, configuration, metadata, uses, and forensic value.
For part one of the series, which discusses the Windows Registry, please visit: http://resources.infosecinstitute.com/windows-systems-and-artifacts-in-digital-forensics-part-i-registry/
For part two of the series, which discusses event logs, deleted data, computer sleep and the erasure of artifacts in Windows, please visit: http://resources.infosecinstitute.com/windows-systems-and-artifacts-in-digital-forensics-part-ii/
Windows Prefetch files first appeared in Windows XP; their purpose is to speed up the startup of launched applications.
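As a taste of the file structure the article covers, here is a minimal sketch of parsing the prefetch header fields common to XP through Windows 7: a format version at offset 0, the signature `SCCA` at offset 4, and the UTF-16 executable name at offset 0x10. Offsets follow the widely documented format; Windows 8 and later (and Windows 10’s compressed files) differ, so treat this as illustrative.

```python
import struct

def parse_prefetch_header(data):
    """Parse the basic header of an uncompressed XP/Vista/7 .pf file."""
    version, signature = struct.unpack_from("<I4s", data, 0)
    if signature != b"SCCA":
        raise ValueError("not a prefetch file")
    # 60 bytes of UTF-16LE executable name at offset 0x10, NUL-terminated
    raw_name = data[0x10:0x10 + 60].decode("utf-16-le")
    exe_name = raw_name.split("\x00", 1)[0]
    return {"version": version, "exe_name": exe_name}

# Synthetic example (not a real capture): version 0x17 is Vista/7
blob = bytearray(0x10 + 60)
struct.pack_into("<I4s", blob, 0, 0x17, b"SCCA")
blob[0x10:0x10 + 22] = "NOTEPAD.EXE".encode("utf-16-le")
print(parse_prefetch_header(bytes(blob)))
```

Beyond the header, the forensic gold (run count, last-run timestamps, referenced files and volumes) lives deeper in the file, at version-specific offsets.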
OS X Hardening: Securing a Large Global Mac Fleet: OS X security is evolving: defenses are improving with each OS release but the days of “Macs don’t get malware” are gone. Recent attacks against the Java Web plugin have kindled a lot of interest in hardening and managing Macs. So how does Google go about defending a large global Mac fleet? Greg will discuss various hardening tweaks and a range of OS X defensive technologies including XProtect, Gatekeeper, Filevault 2, sandboxing, auditd, and mitigations for Java and Flash vulns.
A former pentester, incident responder, and forensic analyst, Greg Castle has been responsible for the security of Google’s OS X fleet for a couple of years, working closely with the Google MacOps team to harden and protect Google’s global Mac fleet. He is now working in Google’s incident response team on the GRR Rapid Response project: Google’s open source incident response framework.
DetecTor.io: DetecTor is an open source project to implement client side SSL/TLS MITM detection, compromised CA detection and server impersonation detection, by making use of the Tor network.
+Justin Case or +Dan Rosenberg’s exploits).
PS. Working on this has me absolutely frightened of all the traffic coming out of my device!