News

10 crazy IT security tricks that actually work
We offer 10 security ideas that have been (and in many cases still are) shunned as too offbeat to work, but that function quite effectively in helping secure the company's IT assets.
1: Renaming admins
Renaming privileged accounts to something less obvious than "administrator" is often slammed as a wasteful, "security by obscurity" defense. However, this simple security strategy works. If the attacker hasn't already made it inside your network or host, there's little reason to believe they'll be able to readily discern the new names for your privileged accounts. If they don't know the names, they can't mount a successful password-guessing campaign against them.
Rarely, if ever, has automated malware attempted to use anything but the built-in account names. By renaming your privileged accounts, you defeat hackers and malware in one step. Plus, it's easier to monitor and alert on logon attempts against the original privileged account names once they're no longer in use.
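The monitoring idea can be sketched in a few lines. The log format, field names, and decoy account list below are assumptions for illustration, not any specific product's output:

```python
# Sketch: flag logon attempts against retired privileged-account names.
# Once "administrator" is renamed, any attempt to use the old name is
# suspicious by definition. Names and log format are hypothetical.
DECOY_NAMES = {"administrator", "admin", "root"}

def suspicious_logons(log_lines):
    """Return the log lines that target a decommissioned account name."""
    hits = []
    for line in log_lines:
        # Assumed format: "<timestamp> logon user=<name> result=<ok|fail>"
        for field in line.split():
            if field.startswith("user=") and field[5:].lower() in DECOY_NAMES:
                hits.append(line)
    return hits

sample = [
    "2014-08-09T10:00:01 logon user=administrator result=fail",
    "2014-08-09T10:00:02 logon user=jsmith result=ok",
]
alerts = suspicious_logons(sample)
```

Any hit on the list is worth investigating, since no legitimate process should use the old names at all.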
2: Getting rid of admins
Another recommendation is to do away with privileged accounts wholesale: administrator, domain admin, enterprise admin, and every other account and group that has built-in, widespread privileged permissions by default.
When this is suggested, most network administrators laugh and protest, but Microsoft followed this recommendation, disabling local Administrator accounts by default on every version of Windows starting with Vista/Server 2008. Lo and behold, hundreds of millions of computers later, the world hasn't come crashing down.
True, Windows still allows you to create an alternate Administrator account, but today's most aggressive computer security defenders recommend getting rid of all built-in privileged accounts, or at least not keeping them in full-time use.
3: Honeypots
A honeypot is any computer asset that is set up solely to be attacked. Honeypots have no production value. They sit and wait, and they are monitored. When a hacker or malware touches them, they send an alert to an admin so that the touch can be investigated. They provide low noise and high value.
The shops that use honeypots get notified quickly of active attacks. Still, many people are incredulous when honeypots are suggested. But sometimes the best thing you can do is to try one.
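A minimal honeypot really is this simple: listen on an unused port on a machine with no production role, and treat any touch as an alert. A sketch using Python's standard library; the port number and alert format are arbitrary assumptions:

```python
import socketserver
from datetime import datetime, timezone

def format_alert(src_ip: str, src_port: int, decoy_port: int) -> str:
    """Build the alert line sent to the admin; any touch is suspect."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"[HONEYPOT] {ts} connection from {src_ip}:{src_port} to decoy port {decoy_port}"

class DecoyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        ip, port = self.client_address
        # In a real deployment this would page an admin, not just print.
        print(format_alert(ip, port, self.server.server_address[1]))

def serve(port: int = 2222):
    """Run the decoy listener forever; nothing legitimate should connect."""
    with socketserver.TCPServer(("0.0.0.0", port), DecoyHandler) as srv:
        srv.serve_forever()

# serve()  # uncomment to run the listener
```

Because the host has no production value, the false-positive rate is near zero: low noise, high value.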
4: Using nondefault ports
Another technique for minimizing security risk is to install services on nondefault ports. Like renaming privileged accounts, this security-by-obscurity tactic goes gangbusters. When zero-day, remote buffer overflow threats become weaponized by worms, computer viruses, and so on, they almost always, and usually only, go for the default ports. This is the case for SQL worms, HTTP worms, SSH discoverers, and any other attack against a commonly advertised port.
Critics of this method of defense say it's easy for a hacker to find where the default port has been moved, and this is true. All it takes is a port scanner or an application fingerprinter to identify the app running on the nondefault port. In reality, though, most attacks are automated using malware, which, as stated, almost always goes for the default ports, and most hackers don't bother to look for nondefault ones.
5: Installing to custom directories
Another security-by-obscurity defense is to install applications to nondefault directories.
This one doesn't work as well as it used to, given that most attacks happen at the application file level today, but it still has value. Like the previous security-by-obscurity recommendations, installing applications to custom directories reduces risk: automated malware almost never looks anywhere but the default directories. If malware is able to exploit your system or application, it will try to manipulate that system or application by looking in the default directories. Install your OS or application to a nonstandard directory and you break its hard-coded assumptions.
6: Tarpits
Worms readily replicate to any system that matches their exploit capabilities. A tarpit works by answering connection attempts for addresses not already assigned to legitimate machines: it accepts the worm's connection, then spends the rest of the session slowing the worm down using various TCP tricks: long timeouts, repeated retransmissions, and so on.
Today, many networks (and honeypots) have tarpit functionality, which answers for any nonvalid connection attempt. When penetration testers probe these networks, their attacks and network-sweep scans slow to a crawl; they're rendered unusable, which is exactly the purpose.
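A minimal sketch of the idea, modeled loosely on SSH tarpits: accept the connection, then dribble out an endless stream of harmless pre-banner lines so slowly that the attacking client is stuck waiting. The port and interval are arbitrary assumptions:

```python
import random
import socket
import time

def banner_line() -> bytes:
    """One random line sent before the SSH version string; the protocol
    permits any number of these, so a scanner keeps waiting for more."""
    return b"%x\r\n" % random.getrandbits(32)

def tarpit(host: str = "0.0.0.0", port: int = 2223, interval: float = 10.0):
    """Accept one victim at a time and feed it a slow byte trickle forever."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            print("tarpitting", addr)
            try:
                while True:
                    conn.sendall(banner_line())
                    time.sleep(interval)  # the slowness is the point
            except OSError:
                conn.close()  # the victim finally gave up

# tarpit()  # uncomment to run the trap
```

Each trapped connection costs the attacker a socket and a timeout; a sweep of a tarpitted address range ties up the whole scan.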
7: Network traffic flow analysis
With foreign hackers abounding, one of the best ways to discover massive data theft is through network traffic flow analysis. Free and commercial software is available to map your network flows and establish baselines for what should be going where. That way, if you see hundreds of gigabytes of data suddenly and unexpectedly heading offshore, you can investigate. Most APT attacks would have been recognized months earlier if the victim had known what data should have been going where, and when.
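The core of flow analysis is just aggregation against a baseline. A sketch; the record format, destinations, and threshold factor are assumptions standing in for whatever your flow collector exports:

```python
from collections import defaultdict

def flag_anomalies(flows, baseline_bytes, factor=10):
    """flows: iterable of (destination, byte_count) records, e.g. from a
    NetFlow export. baseline_bytes maps destinations to expected volume;
    unknown destinations default to 0, so any traffic to them is flagged."""
    totals = defaultdict(int)
    for dst, nbytes in flows:
        totals[dst] += nbytes
    return {dst: total for dst, total in totals.items()
            if total > factor * baseline_bytes.get(dst, 0)}

baseline = {"10.0.0.0/8": 5_000_000_000, "203.0.113.0/24": 1_000_000}
records = [("10.0.0.0/8", 2_000_000_000),
           ("198.51.100.0/24", 300_000_000_000)]  # unexpected offshore flow
suspicious = flag_anomalies(records, baseline)
```

Hundreds of gigabytes heading to a destination with no baseline at all is exactly the pattern this surfaces.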
8: Screensavers
Password-protected screensavers are a simple technique for minimizing security risk. If the computing device is idle for too long, a screensaver requiring a password kicks in. Long criticized by users who considered them nuisances to their legitimate work, they're now a staple on every computing device, from laptops to slates to mobile phones.
9: Disabling Internet browsing on servers
Most computer risk is incurred by users' actions on the Internet. Organizations that disable Internet browsing, or all Internet access, on servers that don't need the connections significantly reduce those servers' exposure to maliciousness. You don't want bored admins picking up their email and posting to social networking sites while they're waiting for a patch to download. Instead, block what isn't needed. For companies using Windows servers, consider disabling UAC (User Account Control), because the desktop risk that UAC minimizes isn't present there. UAC can cause some operational issues, so disabling it while maintaining strong security elsewhere is a boon for many organizations.
10: Security-minded development
Any organization producing custom code should integrate security practices into its development process -- ensuring that code security will be reviewed and built in from day one in any coding project. Doing so absolutely will reduce the risk of exploitation in your environment.
This practice, sometimes known as SDL (Security Development Lifecycle), differs from educator to educator, but often includes the following tenets: use of secure programming languages; avoidance of knowingly insecure programming functions; code review; penetration testing; and a laundry list of other best practices aimed at reducing the likelihood of producing security bug-ridden code.
18 Mordad 1393 (9 August 2014), Tags: Articles
Cyber Forensics - Computers
ID: IRCAR201206144
Date: 2012-06-17
Technology has taken the world by storm in recent decades; the advent of the computer has completely revolutionized the way humans live, work and play. In particular, computers have affected businesses in numerous ways, allowing them to run more efficiently. However, there is a dark side to computers, where individuals use them to carry out malicious assaults. These assaults range from fraud and identity theft to hacking, embezzlement and a wide array of other activities. When these individuals are caught, specialists are called upon to seize and gather information from the computers used in crimes. Computer forensics is the science of locating, extracting, and analyzing data from different devices, which specialists then interpret to serve as legal evidence.
Computer crimes have been happening for nearly 50 years, since computers have been used in production. Evidence can be derived from computers and then used in court against suspected individuals. Initially, judges accepted the computer-derived evidence as no different from other forms of evidence. However, as data became more ambiguous with the advancement of computers, computer-derived evidence was not considered as reliable. Therefore, the government has stepped in and addressed some of these issues. It is important to note that evidence gathered from computers is subject to the same standards as evidence gathered from any other type of crime scene. Computer evidence is like any other evidence; it must be authentic, accurate, complete, convincing to juries, and in conformity with common law and legislative rules (admissible). Thus, the evidence gathered from suspected computer-related crimes must conform to the same standards as other evidence to be credible.
Computer-related Crimes
Since computers are everywhere and have virtually penetrated all industries, computer forensics can be helpful when a computer crime has been committed. Criminal prosecutors use computer evidence in a variety of ways for various types of crimes where incriminating documents or files can be found. For example, in instances of homicide, financial fraud, drug and embezzlement record keeping, and child pornography, prosecutors can hire computer forensics specialists to gather data that can be used in court. Insurance agencies have the ability to mitigate costs if insurance fraud has taken place (e.g., computer evidence that pertains to the possibility of fraud in accident, arson or worker's compensation cases). Civil litigations can use personal and business records found on computers and various media that could possibly bear on discrimination, divorce or harassment cases. Corporations sometimes hire computer forensics specialists to gather evidence when certain threatening issues arise, such as the leak of internal and confidential information, embezzlement, theft, or unlawful access to internal computers. Employees may also hire specialists to build a case against a particular corporation. For example, an employee may try to gather evidence to support a claim of age or race discrimination, or wrongful termination. Should incriminating evidence be discovered from any of the instances mentioned above, it can be used against the accused party in court.
Computer criminals can infiltrate systems on various platforms and commit a wide array of crimes. Typically, the systems that the criminals attempt to penetrate are protected with some type of security device to inhibit access. Some of these crimes include hacking web sites for bank account information, credit card information and personal identification, or stealing trade secrets from a company or government institution. For virtually any crime that is committed using a computer in some form, forensics specialists can be called upon to gather evidence against the accused individuals.
Criminals can use computers in two ways to carry out their activities. First, they may utilize the computer as a repository, also known as a database, to house the information they have acquired. For example, if a criminal is collecting credit card or personal identification information, he/she might create flat files, such as a text file, to copy and record the retrieved information for later use. The criminal can also create a database if he/she has a large list of information to easily run queries against to extract the type of information desired.
Criminals also use computers as a tool to commit crimes. They utilize their ability to connect to the Internet and various other types of networks. The computer simply needs a modem or Ethernet card to connect. The criminal may then connect to bank networks, home networks, office networks or virtual private networks (VPNs). The individual can utilize a number of tools to gain access to these networks and their data. The criminal might also use ghost terminals, which are machines not owned by the individual but used to carry out unlawful activities. For example, a hacker may connect to a computer that he/she hacked on a university campus, and then launch attacks from that computer and possibly store data on it. Agents should consider the possibility that the computer user has stored valuable information at some remote location. Specialists will need to survey and assess various avenues during an investigation, even those that are not immediately obvious at the crime scene.
Computer forensics overview by Fredrick Gallegos, CISA, CDE, CGFM
To Whitelist or To Not Whitelist
Date: 2012-06-20
There are some key concerns about using whitelisting in your organization to control which applications users can use. In my opinion, whitelisting can be a very powerful tool to help reduce the overall attack surface within your organization. That is, to have the ability to control, one by one, which applications can run and which can’t. However, some issues arise when you start to put the rubber to the pavement in your configuration and implementation of a whitelisting solution. If you can overcome the hurdles that come with deploying a whitelisting solution, I suggest you implement it as soon as possible. If you can’t overcome the hurdles, there are some other settings that I always suggest along with whitelisting that I think should be done at a minimum.
Whitelisting in a Nutshell
Whitelisting (and blacklisting) is a way to create a master list of applications that you want to allow and deny within your organization. Let’s say for example that you are an accounting firm and you run very few applications. The whitelist that you create might just be:
  • Word
  • Excel
  • Quickbooks
  • Internet Explorer
However, you know that there are some other applications that you do not want your employees to run, so you also create a denial list, referred to as a blacklist. That might include:
  • Cain
  • Dumpsec
  • Ldp
From this example, it is clear what the employees can run and what they can’t run.
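In code, the policy above amounts to a default-deny lookup. A sketch in Python; the executable file names are assumptions standing in for the four allowed applications and three denied tools:

```python
# Hypothetical executable names for the accounting firm's lists.
WHITELIST = {"winword.exe", "excel.exe", "qbw.exe", "iexplore.exe"}
BLACKLIST = {"cain.exe", "dumpsec.exe", "ldp.exe"}

def allowed(exe_name: str) -> bool:
    """The blacklist wins outright; otherwise only whitelisted names run.
    Anything unlisted is denied by default, which is the whole point."""
    name = exe_name.lower()
    if name in BLACKLIST:
        return False
    return name in WHITELIST
```

Note that with a true whitelist the blacklist is almost redundant: anything not explicitly allowed is already blocked.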
Hurdles with Whitelisting
Looking at our simple accounting firm example, we can start to create a list of potential hurdles for our whitelisting solution, even with such a small organization and set of applications.
Missed Applications
Let's say, for example, that the owner of the accounting firm runs a special application to check stock quotes, called AcmeStock. When you roll out your whitelisting solution based on the above lists, you forget to include AcmeStock, and that application no longer works for the owner. After rolling out your whitelisting solution, you find that many applications were missed, as each user seems to run odd, undisclosed applications which now fail, preventing those users from working fully.
Defined, but Not Desired Applications
In a typical whitelisting solution, you will not be able to just list applications individually, because so many applications located in Program Files and System32 need to be included as part of routine operating system functionality. Thus, you will need to include these folders in your whitelist. Once you do that, someone might place a malicious application in one of those folders, and the whitelist will then allow it to run.
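One common mitigation, standard in mature whitelisting products though not named above, is to whitelist by cryptographic hash rather than by folder, so a file dropped into a trusted directory doesn't inherit trust. A sketch; the approved-hash set is hypothetical:

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Hex digest of a binary's contents."""
    return hashlib.sha256(data).hexdigest()

def sha256_file(path: str) -> str:
    """Hash a binary in chunks so large executables aren't read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical set of approved binary hashes, distributed with the policy.
APPROVED = {sha256_bytes(b"demo-binary-contents")}

def allowed_by_hash(contents: bytes) -> bool:
    """Trust follows the bytes, not the folder: a renamed or freshly
    dropped file fails even if it sits in Program Files."""
    return sha256_bytes(contents) in APPROVED
```

The trade-off is maintenance: every patched binary gets a new hash, so the approved set must be updated with each software update.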
Missing Malicious Applications
There are thousands, if not tens of thousands, of malicious applications that you can download from the Internet. Not to mention home grown malicious applications that can be run on an endpoint. There are very few blacklists that can be created to catch them all. Thus, solving this hurdle becomes difficult and somewhat of a moving target.
Whitelisting Does Not Elevate Applications
If the user runs on the endpoint as a standard user (no local administrator privileges), the whitelist does not elevate applications that require local administrator rights, like Quickbooks; in our case this will stop Quickbooks from running. Granting the user local administrator privileges just to run one application creates a significant attack surface (both on the endpoint and outward from it), so that is not a solution.
Overcoming the Hurdles
As you can see, there are many hurdles to overcome with a whitelisting implementation. Many of the more sophisticated whitelisting solutions have answers to many of these hurdles, but not all.
In order to solve the issue of missed applications, most solutions have some form of application monitoring and reporting service. This produces a list of all applications that are being used on all endpoints, so even seldom used applications can be caught and added to the whitelist.
As for the hurdle of undefined or missed malicious applications, this is a difficult issue to tackle. Most solutions require you to add all of the undesired applications to a blacklist. In my opinion, if you have a least-privilege, standard-user endpoint, the applications that might be run but are not defined on the blacklist will not be very useful in an attack scenario, so the risk of not having every such application listed on the blacklist is minimal.
As for the elevation-of-applications issue, I have yet to see a whitelisting solution that includes a technology to handle elevation. Elevation of applications on endpoints is a very common dilemma, which until about six years ago was not truly solved. Having worked with Windows security for the past 10 years, I find that solutions like PowerBroker from BeyondTrust provide the most robust suite of solutions for privilege management, and even whitelisting with the newest release of PowerBroker Windows Desktops 5.2.
If You Don't Whitelist, What Is Minimum Endpoint Security?
Due to the overhead for gathering and deploying whitelists and blacklists, many organizations stay away from implementing a whitelist solution to protect endpoints and networks. If you fall into this category, my suggestion for minimum configurations on the endpoint is to use a privilege management solution in combination with an anti-virus solution.
What the privilege management solution provides is a way to force the user to be a standard user, which prevents them from installing and running any application that requires local administrator privileges, except for what you configure them to run. Unlike a whitelist/blacklist solution, privilege management solutions like PowerBroker only need a list of applications that require elevation; unlisted applications that require elevation simply fail by default.
What the privilege management solution lacks when implemented alone is the ability to deny undesired applications that a standard user can run. However, creating a list of just those applications which require elevation is a small fraction of the work of maintaining full allow and deny lists.
Understanding Man-In-The-Middle Attacks - SSL Hijacking (Part 4)
Date: 20-05-2012
In this article we are going to examine SSL spoofing, which is inherently one of the most potent MITM attacks because it allows for exploitation of services that people assume to be secure. I will begin by discussing some theory behind SSL connections and what makes them secure, and then follow by showing how that can be exploited. As always, the last section of the article is reserved for detection and prevention tips.
Secure Sockets Layer (SSL), or Transport Layer Security (TLS) in its more modern implementation, is a protocol designed to provide security for network communication by means of encryption. It is most commonly paired with other protocols to provide a secure implementation of the service those protocols offer. Examples of this include SMTPS, IMAPS, and most commonly HTTPS. The ultimate goal is to create secure channels over insecure networks.
In this article we will focus on attacking SSL over HTTP, known as HTTPS, because it is the most common use of SSL. You may not realize it, but you probably use HTTPS daily. Most popular e-mail services and online banking applications rely on HTTPS to ensure that communication between your web browser and their servers is encrypted. If it weren't for this technology, anybody with a packet sniffer on your network could intercept usernames, passwords, and anything else that would normally be hidden.
The process used by HTTPS to ensure data is secure centers around the distribution of certificates between the server, the client, and a trusted third party. As an example let’s say that a user is trying to connect to a Gmail e-mail account. This involves a few distinct steps, which are briefly simplified in Figure 1.

Figure 1: The HTTPS Communication Process
The process outlined in Figure 1 is by no means detailed, but basically works out as follows:
  1. The client browser connects to http://mail.google.com on port 80 using HTTP.
  2. The server redirects the client to the HTTPS version of the site using an HTTP code 302 redirect.
  3. The client connects to https://mail.google.com on port 443.
  4. The server provides a certificate to the client containing its digital signature. This certificate is used to verify the identity of the site.
  5. The client takes this certificate and verifies it against its list of trusted certificate authorities.
  6. Encrypted communication ensues.
If the certificate validation process fails then that means the website has failed to verify its identity. At that point the user is typically presented with a certificate validation error and they can choose to proceed at their own risk, because they may or may not actually be communicating with the website they think they are talking to.
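The validation in steps 4 and 5 is what Python's standard ssl module performs by default. A sketch; the hostname is only an example, and the actual network call is left commented out so the verification settings can be inspected without connecting:

```python
import socket
import ssl

def verified_context() -> ssl.SSLContext:
    """Default context: loads the OS trust store, requires a server
    certificate, and checks it against the hostname (steps 4-5)."""
    return ssl.create_default_context()

def fetch_cert(hostname: str, port: int = 443) -> dict:
    """Perform steps 3-5; raises ssl.SSLCertVerificationError if the
    certificate cannot be verified against the trusted CA list."""
    ctx = verified_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# fetch_cert("mail.google.com")  # returns subject/issuer/validity fields
```

A browser's certificate warning corresponds to the exception path here: verification failed, and proceeding anyway is at the user's own risk.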
Defeating HTTPS
This process was considered highly secure up until several years ago when an attack was published that allowed for successful hijacking of the communication process.
Moxie Marlinspike, a well-known security researcher, observed that in most cases SSL is never encountered directly. That is, most of the time an SSL connection is initiated because someone was redirected to an HTTPS site via an HTTP 302 response code, or because they clicked a link that points to an HTTPS site, such as a login button.
The process is fairly straightforward and is reminiscent of some of the attacks we’ve completed in previous articles. It is outlined in Figure 2.

Figure 2: Hijacking HTTPS Communication
The process outlined in Figure 2 works like this:
  1. Traffic between the client and web server is intercepted.
  2. When an HTTPS URL is encountered sslstrip replaces it with an HTTP link and keeps a mapping of the changes.
  3. The attacking machine supplies certificates to the web server and impersonates the client.
  4. Traffic is received back from the secure website and provided back to the client.
The process works quite well. As far as the server is concerned, it is still receiving the SSL traffic it expects; it doesn't know the difference. The only visible difference in the user experience is that the traffic will not be flagged as HTTPS in the browser, so a cognizant user will be able to notice that something is amiss.
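Step 2 is the heart of the attack and is easy to sketch: rewrite every https:// URL in the proxied page to http:// and remember the mapping, so the proxy can re-upgrade those requests when talking to the real server. This is an illustration of the technique, not sslstrip's actual code:

```python
import re

def strip_https(html: str, mapping: dict) -> str:
    """Downgrade links in a proxied page, recording the original URLs so
    the attacker-side proxy can still speak HTTPS to the real server."""
    def downgrade(match: re.Match) -> str:
        secure_url = match.group(0)
        plain_url = "http://" + secure_url[len("https://"):]
        mapping[plain_url] = secure_url  # remember for re-upgrading
        return plain_url
    return re.sub(r"https://[^\s\"'<>]+", downgrade, html)

seen = {}
page = '<a href="https://mail.google.com/login">Sign in</a>'
stripped = strip_https(page, seen)
```

The victim's browser never sees an HTTPS link at all, so no certificate warning is ever triggered; the only clue is the missing HTTPS indicator.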
Defending Against SSL Hijacking
As discussed previously, SSL hijacking in this manner is virtually undetectable from the server side of the equation, because as far as the server is concerned this is just normal communication with a client. It has no idea that it is communicating with a client by proxy. Luckily, there are a few things that can be done from the client's perspective to detect and prevent these types of attacks.
  • Ensure Secure Connections Use HTTPS - When you perform the attack described here it strips the secure aspect of the connection away, which is visible in the browser. This means that if you log into your online banking and notice that it is just a standard HTTP connection there is a good chance something is wrong. Whatever browser you choose to use, you should ensure you know how to distinguish secure connections from insecure ones.
  • Save Online Banking for Home - The chance of somebody intercepting your traffic on your home network is much less than on your work network. This isn't because your home computer is more secure (let's face it, it's probably less secure), but the simple fact of the matter is that if you only have one or two computers at home, the most you have to worry about in terms of session hijacking is if your 14-year-old son starts watching hacking videos on YouTube. On a corporate network you don't know what is going on down the hall or in the branch office 200 miles away, so the potential attack sources multiply. One of the biggest targets for session hijacking is online banking, but this principle applies to anything.
  • Secure your internal machines - Not to beat a dead horse, but once again, attacks like these are most commonly executed from inside the network. If your network devices are secure then there is less of a chance of those compromised hosts being used to launch a session hijacking attack.
Wrap Up
This form of MITM attack is one of the deadliest because it takes what we think is a secure connection and makes it completely insecure. If you consider how many secure sites you visit each day and then consider the potential impact if all of those connections were insecure and that data fell into the wrong hands then you will truly understand the potential impact this could have on you or your organization.
Related Link:
Understanding Man-In-The-Middle Attacks - Session Hijacking (Part 3)
Understanding Man-In-The-Middle Attacks - Session Hijacking (Part 3)
Date: 19-05-2012
In the first two articles of this series on man-in-the-middle attacks we examined ARP cache poisoning and DNS spoofing. As we have demonstrated with those examples, MITM attacks are incredibly effective and increasingly hard to detect. In the third part of this article we will examine session hijacking, which is no different. As with the previous two articles I will describe the theory behind session hijacking, demonstrate the technique in practice, and discuss detection and prevention tips.
Session Hijacking
The term session hijacking is thrown around frequently and encompasses a variety of different attacks. In general, any attack that involves the exploitation of a session between devices is session hijacking. When we refer to a session, we are talking about a connection between devices in which there is state. When we talk about sessions theoretically it’s a bit confusing, so it may help to think of a session in a more practical sense.
In this article we will be talking about session hijacking through cookie stealing, which involves HTTP sessions. If you think about some of the common websites you visit that require login credentials, those are great examples of session-oriented connections. You must be authenticated by the website with your username and password to formally set up the session, the website maintains some form of session tracking to ensure you are still logged in and are allowed to access resources (often done with a cookie), and when you log out, the credentials are cleared and the session ends. This is a very specific example of a session, and even though we do not always realize it, sessions occur constantly; most communication relies on some form of session or state-based activity.

Figure 1: A normal session
As we have seen in previous attacks, nothing that goes across the network is safe and session data is no different. The principle behind most forms of session hijacking is that if you can intercept certain portions of the session establishment, you can use that data to impersonate one of the parties involved in the communication so that you may access session information. In the case of our earlier example, this means that if we were to capture the cookie that is used to maintain the session state between your browser and the website you are logging into, we could present that cookie to the web server and impersonate your connection.

Figure 2: Session Hijacking
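The cookie-stealing mechanics described above can be sketched in two steps: pull the Cookie header out of a sniffed plaintext request, then replay it. The request text below is fabricated for illustration; a real capture would come from a sniffer on the network path:

```python
def extract_cookie(raw_request: str):
    """Return the Cookie header value from a sniffed HTTP request, if any."""
    for line in raw_request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            return line.split(":", 1)[1].strip()
    return None

def replay_headers(stolen_cookie: str) -> dict:
    """Headers for the attacker's own request; to the server this session
    cookie is indistinguishable from the victim's browser traffic."""
    return {"Cookie": stolen_cookie, "User-Agent": "Mozilla/5.0"}

sniffed = ("GET /mail/inbox HTTP/1.1\r\n"
           "Host: webmail.example.com\r\n"
           "Cookie: SESSIONID=d41d8cd98f00b204\r\n\r\n")
cookie = extract_cookie(sniffed)
```

Presenting those headers in any HTTP client logs the attacker straight into the victim's session, with no password required.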

Defending Against Session Hijacking
There are many different forms of session hijacking so the defenses for them can vary. Just like the other MITM attacks we’ve evaluated, session hijacking is difficult to detect and even more difficult to defend against because it’s a mostly passive attack. Unless the malicious user performs some type of obvious action when he accesses the session being hijacked, you may never know that they were there. Here are a few things you can do to better defend against session hijacking:
  • Save Online Banking for Home - The chance of somebody intercepting your traffic on your home network is much less than on your work network. This isn't because your home computer is more secure (let's face it, it's probably less secure), but the simple fact of the matter is that if you only have one or two computers at home, the most you have to worry about in terms of session hijacking is if your 14-year-old son starts watching hacking videos on YouTube. On a corporate network you don't know what is going on down the hall or in the branch office 200 miles away, so the potential attack sources multiply. One of the biggest targets for session hijacking is online banking.
  • Be Cognizant - Smart attackers will not leave any evidence that they have been in one of your secure accounts but even the most seasoned hackers make mistakes. Being aware when you are logged into session-based services can help you determine if somebody else is walking in your shadow. Keep an eye out for things that seem out of place, and pay attention to “Last Logon Time” fields to ensure everything matches up.
  • Secure your internal machines - Once again, attacks like these are most commonly executed from inside the network. If your network devices are secure then there is less of a chance of those compromised hosts being used to launch a session hijacking attack.
Wrap Up
We have now covered three very lethal MITM attack types which could all have very grave consequences if successfully carried out against a victim. Using session hijacking someone with malicious intentions could access a user’s online banking, e-mail, or even a sensitive intranet application.
Related Link:
Understanding Man-In-The-Middle Attacks – DNS Spoofing (Part 2)
Date: 2012-04-18
In the first installment of this series we reviewed normal ARP communication and how the ARP cache of a device can be poisoned in order to redirect a machine's network traffic through another machine, with possible malicious intent. This seemingly advanced man-in-the-middle (MITM) attack, known as ARP cache poisoning, is done easily with the right software. In this article we will discuss a similar type of MITM attack called DNS spoofing.
DNS Spoofing
DNS spoofing is a MITM technique used to supply false DNS information to a host so that when it attempts to browse, for example, www.bankofamerica.com at the IP address XXX.XX.XX.XX, it is actually sent to a fake www.bankofamerica.com residing at IP address YYY.YY.YY.YY, which an attacker has created in order to steal online banking credentials and account information from unsuspecting users. This is actually done quite easily, and here we will see how it works, how it is done, and how to defend against it.
Normal DNS Communication
The Domain Name System (DNS) protocol is what some consider one of the most important protocols in use on the Internet. In a nutshell, whenever you type a web address such as http://www.google.com into your browser, a DNS request is made to a DNS server in order to find out what IP address that name resolves to. This is because routers and the devices that interconnect the Internet do not understand google.com; they only understand numeric IP addresses.
A DNS server itself works by storing a database of entries (called resource records) of IP address to DNS name mappings, communicating those resource records to clients, and communicating those resource records to other DNS servers. The architecture of DNS servers throughout enterprises and the Internet is something that can be a bit complicated. As a matter of fact, there are whole books dedicated to DNS architecture. We will not cover architectural aspects or even all of the different types of DNS traffic, but we will look at a basic DNS transaction, seen in Figure 1.

Figure 1: A DNS Query and Response
DNS functions in a query/response format. A client wishing to resolve a DNS name to an IP address sends a query to a DNS server, and the server sends the requested information in its response. From the client's perspective, the only two packets seen are this query and this response.
This scenario gets slightly more complex when you consider DNS recursion. Due to the hierarchical nature of the Internet's DNS structure, DNS servers need the ability to communicate with each other in order to locate answers for the queries submitted by clients. After all, it might be fair to expect our internal DNS server to know the name-to-IP-address mapping of our local intranet server, but we can't expect it to know the IP address correlated with Google or Dell. This is where recursion comes into play. Recursion is when one DNS server queries another DNS server on behalf of a client that has made a request. Basically, this turns a DNS server into a client itself, as seen in Figure 2.

Figure 2: A DNS Query and Response Using Recursion
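To make recursion concrete, here is a toy resolver sketch in Python; the class, the hostnames and the addresses are invented for illustration and are not part of any real DNS implementation:

```python
# Toy model of DNS recursion: a resolver answers from its own resource
# records when it can, and otherwise queries an upstream resolver on the
# client's behalf, caching what it learns.

class Resolver:
    def __init__(self, records, upstream=None):
        self.records = dict(records)   # name -> IP "resource records"
        self.upstream = upstream       # another Resolver, or None

    def resolve(self, name):
        if name in self.records:
            return self.records[name]
        if self.upstream is not None:
            # Recursion: this server now acts as a client of its upstream.
            answer = self.upstream.resolve(name)
            if answer is not None:
                self.records[name] = answer  # cache the answer locally
            return answer
        return None

root = Resolver({"example.com": "93.184.216.34"})
internal = Resolver({"intranet.local": "10.0.0.5"}, upstream=root)

print(internal.resolve("intranet.local"))  # answered from local records
print(internal.resolve("example.com"))     # answered via recursion
```

The internal server answers the intranet name itself but must recurse to its upstream for anything else, exactly as described above.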
Spoofing DNS
There is more than one method available for performing DNS spoofing; we will be using a technique called DNS ID spoofing. Every DNS query that is sent out over the network contains a uniquely generated identification number whose purpose is to identify queries and responses and tie them together. This means that if our attacking computer can intercept a DNS query sent out from a target device, all we have to do is create a fake packet that contains that identification number in order for that packet to be accepted by the target.
This process will complete in two steps. First, we will ARP cache poison the target device to reroute its traffic through our attacking host so that we can intercept the DNS request, and then we will actually send the spoofed packet. The goal of this scenario is to get users on the target network to visit our malicious website rather than the website they are attempting to access. A depiction of this attack is seen in Figure 3.

Figure 3: The DNS Spoofing Attack Using the DNS ID Spoofing Method
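As a rough sketch of why the identification number is all that ties a response to its query, the fragment below builds a minimal DNS query and then forges a response that simply copies the 16-bit transaction ID. The helper names, hostname and addresses are made up, and nothing here touches the network; it only illustrates the packet layout.

```python
import struct

def build_query(txid, name):
    """Minimal DNS query: 12-byte header, then one question
    (the name in label form, type A, class IN)."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def forge_response(query, fake_ip):
    """Forge a response to an intercepted query: copy its 16-bit
    transaction ID, echo its question, and answer with our IP."""
    txid = struct.unpack(">H", query[:2])[0]
    question = query[12:]
    header = struct.pack(">HHHHHH", txid, 0x8180, 1, 1, 0, 0)
    answer = (b"\xc0\x0c"                          # pointer back to the name
              + struct.pack(">HHIH", 1, 1, 60, 4)  # type A, class IN, TTL, rdlength
              + bytes(int(o) for o in fake_ip.split(".")))
    return header + question + answer

query = build_query(0x1A2B, "www.example.com")
fake = forge_response(query, "203.0.113.9")
# Same ID as the real query, so a naive resolver accepts whichever
# response arrives first.
assert fake[:2] == query[:2]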
Defending Against DNS Spoofing
DNS spoofing is difficult to defend against because the attack is mostly passive by nature. Typically, you will never know your DNS is being spoofed until it has happened: you get a webpage that is different from the one you were expecting. In very targeted attacks it is quite possible that you may never know you were tricked into entering your credentials into a false site until you receive a call from your bank. That being said, there are still a few things that can be done to defend against these types of attacks:
  • Secure your internal machines: Attacks like these are most commonly executed from inside the network. If your network devices are secure then there is less of a chance of those compromised hosts being used to launch a spoofing attack.
  • Don’t rely on DNS for secure systems: On highly sensitive and secure systems that you typically won’t be browsing the Internet on, it’s often a best practice not to use DNS. If you have software that relies on hostnames to function, those can be specified manually in the device’s hosts file.
  • Use IDS: An intrusion detection system, when placed and deployed correctly, can typically pick up on most forms of ARP cache poisoning and DNS spoofing.
  • Use DNSSEC: DNSSEC extends DNS with digitally signed DNS records to ensure the validity of a query response. DNSSEC is not yet in wide deployment, but it has been widely accepted as “the future of DNS”.
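For example, the "don't rely on DNS" advice usually comes down to hosts-file entries like the following (the file lives at /etc/hosts on Unix-like systems and %SystemRoot%\System32\drivers\etc\hosts on Windows; the addresses and names shown are placeholders):

```
# Static name-to-IP mappings, consulted before DNS on most systems.
10.0.0.5      intranet.local
192.0.2.10    payroll.internal.example
```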
Wrap Up
DNS spoofing is a lethal form of MITM attack when paired with the right skill level and malicious intent. Using this technique, an attacker can employ phishing to deceptively steal credentials, install malware with a drive-by exploit, or even cause a denial-of-service condition. In the next article in this series we will look at “pass the hash” attacks.
Related Links:
18 Mordad 1393, Tags: Articles
Understanding Man-in-the-Middle Attacks – ARP Cache Poisoning (Part 1)
Date: 2012-04-17
One of the most prevalent network attacks used against individuals and large organizations alike is the man-in-the-middle (MITM) attack. Considered an active eavesdropping attack, MITM works by establishing connections to victim machines and relaying messages between them. In cases like these, one victim believes it is communicating directly with another victim, when in reality the communication flows through the host performing the attack. The end result is that the attacking host can not only intercept sensitive data but also inject and manipulate a data stream to gain further control of its victims.
In this series of articles we will examine some of the most widely used forms of MITM attacks, including ARP cache poisoning, DNS spoofing, HTTP session hijacking, passing the hash, and more. As you will find in the real world, most victim machines are Windows-based hosts. That being the case, this series of articles will focus entirely on MITM exploitation of hosts running versions of Windows.
ARP Cache Poisoning
In the first article of this series we will take a look at ARP cache poisoning. One of the oldest forms of modern MITM attack, ARP cache poisoning allows an attacker on the same subnet as its victims to eavesdrop on all network traffic between them. I’ve deliberately chosen this as the first attack to examine because it is one of the simplest to execute, yet it is considered one of the most effective once implemented by attackers.
Normal ARP Communication
The ARP protocol was designed out of necessity to facilitate the translation of addresses between the second and third layers of the OSI model. The second layer, or data-link layer, uses MAC addresses so that hardware devices can communicate with each other directly on a small scale. The third layer, or network layer, uses IP addresses (most commonly) to create large, scalable networks that can communicate across the globe. The data-link layer deals with directly connected devices, whereas the network layer deals with devices that are both directly and indirectly connected. Each layer has its own addressing scheme, and they must work together in order to make network communication happen. For this very reason ARP was created: “An Ethernet Address Resolution Protocol”.

Figure 1: The ARP Communication Process
The nitty-gritty of ARP operation is centered around two packets: an ARP request and an ARP reply. The purpose of the request and reply is to locate the hardware MAC address associated with a given IP address so that traffic can reach its destination on a network. The request packet is sent to every device on the network segment and says, “Hey, my IP address is XX.XX.XX.XX and my MAC address is XX:XX:XX:XX:XX:XX. I need to send something to whoever has the IP address XX.XX.XX.XX, but I don’t know their hardware address. Will whoever has this IP address please respond back with their MAC address?” The answer comes in the ARP reply packet: “Hey, transmitting device. I am who you are looking for, with the IP address XX.XX.XX.XX. My MAC address is XX:XX:XX:XX:XX:XX.” Once this is completed, the transmitting device updates its ARP cache table and the devices are able to communicate with one another.
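For the curious, the request and the reply described above share a single 28-byte wire format that differs only in its opcode field. The sketch below builds such a packet with Python's standard library; all addresses are invented examples.

```python
import struct

def mac_bytes(mac):
    """Convert aa:bb:cc:dd:ee:ff form to 6 raw bytes."""
    return bytes.fromhex(mac.replace(":", ""))

def ip_bytes(ip):
    """Convert dotted-quad form to 4 raw bytes."""
    return bytes(int(octet) for octet in ip.split("."))

def build_arp(opcode, sender_mac, sender_ip, target_mac, target_ip):
    """28-byte ARP payload for Ethernet/IPv4: hardware type 1,
    protocol type 0x0800, 6-byte MACs, 4-byte IPs, then the
    sender and target address pairs."""
    header = struct.pack(">HHBBH", 1, 0x0800, 6, 4, opcode)
    return (header + mac_bytes(sender_mac) + ip_bytes(sender_ip)
            + mac_bytes(target_mac) + ip_bytes(target_ip))

# Opcode 1 = request ("who has 192.168.1.1? tell 192.168.1.50"),
# opcode 2 = reply. The target MAC in a request is left as zeros.
request = build_arp(1, "aa:bb:cc:dd:ee:ff", "192.168.1.50",
                    "00:00:00:00:00:00", "192.168.1.1")
assert len(request) == 28
```

A reply is the same structure with opcode 2 and the roles of the address pairs swapped.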
Poisoning the Cache
ARP cache poisoning takes advantage of the insecure nature of the ARP protocol. Unlike protocols such as DNS that can be configured to only accept secured dynamic updates, devices using ARP will accept updates at any time. This means that any device can send an ARP reply packet to another host and force that host to update its ARP cache with the new value. Sending an ARP reply when no request has been generated is called a gratuitous ARP. With malicious intent, a few well-placed gratuitous ARP packets can leave hosts that think they are communicating with one host when in reality they are communicating with a listening attacker.

Figure 2: Intercepting Communication with ARP Cache Poisoning
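The core of the problem can be modelled in a few lines: an ARP cache is essentially a table that any reply, solicited or not, is allowed to overwrite. This sketch, with made-up addresses, simulates a gratuitous reply poisoning such a cache:

```python
# Toy ARP cache: like a real host, it updates unconditionally on any
# reply it sees, whether or not it ever asked.

arp_cache = {"192.168.1.1": "11:22:33:44:55:66"}   # the gateway's real MAC

def on_arp_reply(cache, sender_ip, sender_mac):
    # No check that a matching request was ever sent: ARP is
    # stateless and trusting by design.
    cache[sender_ip] = sender_mac

# Attacker sends a gratuitous reply claiming the gateway's IP address.
on_arp_reply(arp_cache, "192.168.1.1", "aa:bb:cc:dd:ee:ff")

# Traffic destined for the gateway now goes to the attacker's MAC.
assert arp_cache["192.168.1.1"] == "aa:bb:cc:dd:ee:ff"
```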
Defending Against ARP Cache Poisoning
The ARP process happens in the background, with very little of it directly controllable by us. There is no catch-all solution, but proactive and reactive stances can be taken if you are concerned about ARP cache poisoning on your network.
Securing the LAN
ARP Cache Poisoning is only a viable attack technique when attempting to intercept traffic between two hosts on the same local area network. The only reason you would have to fear this is if a local device on your network has been compromised, a trusted user has malicious intent, or someone has managed to plug an un-trusted device into the network. Although we too often focus the entirety of our security efforts on the network perimeter, defending against internal threats and having a good internal security posture can help eliminate the fear of the attack mentioned here.
Hard Coding the ARP Cache
One way to protect against the unsecured, dynamic nature of ARP requests and replies is to make the process a little less… dynamic. This is an option because Windows-based hosts allow the addition of static entries to the ARP cache. You can view the ARP cache of a Windows host by opening a command prompt and typing the command arp -a.

Figure 3: Viewing the ARP Cache
You can add entries to this list by using the command arp -s <IP ADDRESS> <MAC ADDRESS>.
In cases where your network configuration does not change often, it is entirely feasible to make a listing of static ARP entries and deploy them to clients via an automated script. This will ensure that devices will always rely on their local ARP cache rather than relying on ARP requests and replies.
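As a sketch of such an automated script, the fragment below generates the arp -s commands from a central mapping; the addresses and hostnames are placeholders, and you would deploy the output via your own login-script mechanism.

```python
# A central list of static IP-to-MAC mappings. In practice this would
# cover the gateway and other critical hosts on your segment.
STATIC_ENTRIES = {
    "192.168.1.1":  "11:22:33:44:55:66",   # default gateway
    "192.168.1.10": "aa:bb:cc:11:22:33",   # file server
}

def arp_commands(entries):
    """Generate the per-host arp -s commands; the Windows arp tool
    traditionally takes the MAC with dashes rather than colons."""
    return [f"arp -s {ip} {mac.replace(':', '-')}"
            for ip, mac in sorted(entries.items())]

for command in arp_commands(STATIC_ENTRIES):
    print(command)
```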
Monitoring ARP Traffic with a Third Party Program
The last option for defending against ARP cache poisoning is a reactive approach that involves monitoring the network traffic of hosts. This can be done with a few different intrusion detection systems or through downloadable utilities designed specifically for this purpose. This may be feasible when you are only concerned about a single host, but can be a bit cumbersome to deal with when concerned with entire network segments.
Wrap Up
ARP cache poisoning is a great introduction to the world of man-in-the-middle attacks because it is very simple to execute, a very real threat on modern networks, and difficult to detect and defend against. In the next article in this series we will focus on name resolution and the concept of DNS spoofing.
Cloud computing and its security challenges
Date: 2012-03-18
Cloud computing is all the talk among businesses today. Over the next couple of years it has the potential to drastically transform the way organisations perform computing. The benefits of cloud computing are easily recognised: it offers increased storage, flexibility and, most importantly, cost reduction, all of which are essential for assisting the growth of a successful business. The cloud will impact IT departments, architectures, how operations are run and, most controversially to date, how security will be ensured. The cloud's outstanding benefits are attracting many organisations; however, the one aspect holding most back is the question of how secure one's data is in the cloud, and whether the security risks can be overcome to ensure a secure environment.
Potential cloud security risks to take into consideration, and possible steps to take to reduce the risks
1. Where is your data located
When utilising cloud technology, it is very likely that one will not know the location of one's data, where it is hosted or even which country it is located in. A step closer to securing your data is to agree with the vendor to keep and process your organisation's data in a particular area. You could also require that they abide by the requested jurisdiction's privacy requirements, thus keeping the integrity of the data. Different jurisdictions are governed by different laws, and even different encryption requirements and encryption export laws. It is important to observe these laws and ensure compliance, or potentially face a fine or legal action.
2. Is your data segregated
Cloud technology works well in that it is able to store many organisations' data in a shared environment, thus reducing costs. Customers' data thus sits together in the cloud, and the vendor needs to ensure that it is segregated to reduce security risks. One way to do this is to encrypt the data, allowing only specific individuals access to the key. The encryption methods need to have been thoroughly tested in the environment to ensure that they will be effective.
3. Is privileged user access utilised
Sensitive data processed outside the organisation brings with it an inherent level of risk, as outsourced services bypass the security measures that IT departments enforce internally. Trust is now being placed in outsiders, effectively bringing them into the organisation. To decrease the risk, one can obtain as much information as possible about the people in contact with your data, and about how access to your data is controlled.
4. Does the vendor comply with the necessary regulations
At the end of the day, the organisation is ultimately responsible for the security and reliability of its data, even if that data is held outside the company, within the cloud. To ensure that the necessary regulatory compliances are being adhered to, the vendor should be able to demonstrate to external auditors that the organisation's data is secure, by providing transparency into all activities taking place in the cloud with regard to that data.
5. Disaster recovery options
As much as we would all like to believe otherwise, disasters are always waiting to happen, and the organisation needs to be prepared with the correct tools in order to recover. The cloud provider or vendor must be able to tell you what will happen in case of a disaster. It is essential that multiple sites exist where one's data and application infrastructure are replicated.
6. How to go about investigating inappropriate or illegal activity in the cloud
Cloud services lend themselves to being particularly difficult, or nearly impossible, to investigate. Due to the cloud's nature, the logs of more than one customer are likely to be stored together, and hosts and data centres are constantly changing. The process of investigation is therefore almost impossible unless the vendor has a tried and tested method that they can demonstrate and use effectively.
7. Server elasticity
As previously noted, one of the benefits of cloud technology is the great degree of flexibility it brings. This can be problematic when the hosting server needs to be provisioned or de-provisioned to reflect current capacity requirements. Some servers may be reconfigured frequently, without your prior knowledge. This may be challenging for some of the technologies your organisation relies on within the cloud, as the environment does not lend itself to being static. It is also difficult when it comes to securing the data: traditional methods of securing data rely on an understanding of the network infrastructure, and if that infrastructure is forever changing, those security measures would not be suitable.
8. Service Provider downtime
This is a fundamental measure of security often overlooked. The downtime a service provider experiences could be detrimental to your organisation. Reliability with regards to this is essential.
9. Viability for your organisation in the long term
Something to consider when looking for a cloud provider is its viability in the long term. One will need to consider the possibility that you are no longer able to use that particular cloud provider: what routes would be used to ensure the secure transfer of your data to another cloud provider, and how would you maintain the integrity of your data?
Cloud computing reduces operational management by the organisation; however the organisation is still held accountable, even though operational responsibilities are held with one or more third parties in the cloud. Therefore when using cloud technology the vendor or cloud provider you choose should be one you are able to trust, one that is completely transparent, with all the information you require and one that has answers to all your questions and holds nothing back.
Companies need to be vigilant; they need to move in with eyes wide open. Not only do they need a trustworthy relationship with the vendor or cloud provider but also need to gain as much information about the third party companies involved that could potentially access their private data. One should also investigate the hosting company used by the provider and possibly seek an independent security audit of their security status.
Security infrastructure is becoming programmable. Like the way organisations pool resources together with regards to computing, cloud security could pool security resources together to ensure privacy and integrity of data in the cloud.
The more insight one has of the potential risks, the more effective one will be at trying to avoid, minimise and control these risks.
Using BitLocker to Encrypt Removable Media (Section 4)
Date: 2012-02-19
This article concludes the series on BitLocker To Go by demonstrating the process of recovering BitLocker keys from the Active Directory.
In my previous article, I explained that Windows Server 2008 R2 offers the ability to store BitLocker keys for removable devices within the Active Directory database. Since I have already shown you how to enable the necessary group policy settings that allow BitLocker keys to be stored in the Active Directory, I wanted to conclude the series by showing you how the key recovery process works.
For the sake of this demonstration, let us pretend that one of the bigwigs in your organization has placed the only copies of some critically important files onto a BitLocker-encrypted flash drive, and that he has now forgotten the drive’s password. What do you do?
The first step in the recovery process is to insert the BitLocker encrypted flash drive into a computer that’s running Windows 7. When you do, the dialog box shown in Figure A appears, and you are asked to enter the password that is used to unlock the drive. Since the password has been forgotten, we must perform a password recovery.

Figure A: Windows prompts you to enter a password to gain access to the encrypted drive
If you look at the figure above, you will notice that it contains a link labeled I Forgot My Password. Clicking on this link takes you to the screen shown in Figure B.

Figure B: Windows provides recovery options that can be used in the event of a forgotten password
As you look at the screen capture above, the thing that really stands out is the option to type a recovery key. Although we will use this option later on, we are not quite ready to use it yet. If you look carefully at the figure above, you will notice a line of text that says: Your Recovery Key Can Be Identified By: 1A8BBF9A. The hexadecimal number that appears at the end of the text string is unique to the flash drive, and can be used to identify the flash drive during the recovery process. You should therefore write down this number, because you are going to need it later on.
Now we actually have to retrieve the recovery key from the Active Directory. There is just one minor hurdle standing in our way: although the recovery key for the flash drive is stored in the Active Directory, we need a way of retrieving it, and none of the administrative interfaces currently installed on our server offers this capability.
Installing the BitLocker Recovery Password Viewer
Before you can recover BitLocker recovery keys from the Active Directory, you will have to install a utility called the BitLocker Recovery Password Viewer.
To install the BitLocker Recovery Password Viewer, open the Server Manager, and select the Features container. Next, click on the Add Features link, which will cause Windows to open the Add Features Wizard. The Add Features Wizard contains a series of checkboxes that are linked to the various features that you can install. You have to locate the Remote Server Administration Tools option. Expand this option, and then locate a sub option called Feature Administration Tools. Expand the Feature Administration Tools and then select the BitLocker Drive Encryption Administration Utilities check box. Verify that the check boxes beneath this option are also selected, as shown in Figure C, and then click Next.

Figure C: You must enable the BitLocker Drive Encryption Administration Utilities
Now, click the Next button and you will see a screen providing you with a summary of the features that are about to be installed, along with a warning message that a reboot may be required after the installation process completes. Click the Install button, and the necessary binaries will be installed.
When the installation process completes, Windows will display the Installation Results screen, shown in Figure D. Even though Windows Server does not force a reboot, you may have to reboot the server anyway.

Figure D: Even though Windows Server does not force a reboot, you may have to reboot the server anyway
The Key Recovery Process
Now that you have finished installing the various administrative tools, you can move forward with the key recovery process. To recover a BitLocker key, open the Active Directory Users and Computers console, and then right click on the listing for your domain. The resulting shortcut menu will contain a Find BitLocker Recovery Password option, as shown in Figure E.

Figure E: Key recovery is performed through the Active Directory Users and Computers console
When you select the Find BitLocker Recovery Password option, you will be taken to the Find BitLocker Recovery Password dialog box, shown in Figure F. Remember the eight-digit hexadecimal number that uniquely identifies the encrypted drive? This is where you enter that number. Upon doing so, click the Search button and the server will retrieve the drive’s recovery password. If you look at the lower portion of the dialog box, you can see that the recovery password is not the password that the user originally used to encrypt the drive, but rather a 48-digit string of numbers.

Figure F: The recovery key is displayed in the lower portion of the dialog box.
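Incidentally, the 48-digit recovery password has a documented internal structure: eight groups of six digits, each group a multiple of 11 below 720896, so that each group encodes one 16-bit value. Assuming that format, a quick sanity check like the following sketch can catch a mistyped password before you submit it (the helper name is mine, not part of any Microsoft tool):

```python
# Sanity-check a BitLocker recovery password before typing all 48 digits.
# Assumed format: eight dash-separated groups of six digits, each group a
# multiple of 11 and below 720896 (so each group encodes a 16-bit value).

def looks_like_recovery_password(text):
    groups = text.split("-")
    if len(groups) != 8:
        return False
    for group in groups:
        if len(group) != 6 or not group.isdigit():
            return False
        value = int(group)
        if value % 11 != 0 or value >= 720896:
            return False
    return True

# A synthetic password built from multiples of 11 passes the check;
# an arbitrary digit string almost certainly fails it.
good = "-".join(str(11 * n).zfill(6) for n in range(8))
assert looks_like_recovery_password(good)
assert not looks_like_recovery_password("123456-" * 7 + "123456")
```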
Now that you have the recovery key for the drive, go back to the PC in which you inserted the flash drive, and enter the recovery key into the space provided, as shown in Figure G.

Figure G: Enter the BitLocker recovery key in the space provided.
After you enter the recovery key, you should see a screen similar to the one shown in Figure H, telling you that you have been granted temporary access to the drive. In other words, the drive remains encrypted, and the forgotten password is still in effect. If you were to remove and reinsert the drive at this point, you would have to work through the recovery process all over again unless the user happens to remember the password.

Figure H: Entering the recovery key provides temporary access to the encrypted drive
To avoid having to enter the 48 digit recovery key each time the drive is used, click the Manage BitLocker link. Doing so will take you to the dialog box shown in Figure I, which allows you to change the password that is used to gain access to the drive.

Figure I: After you have gained access to an encrypted drive, you should reset the drive’s password
In this article series, I have explained that BitLocker to Go provides you with an easy way to secure data that is stored on removable media. If you plan to use BitLocker to Go though, you should implement Active Directory based key recovery so as to avoid data loss due to forgotten passwords.
Related Links:
Using BitLocker to Encrypt Removable Media (Section 3)
Using BitLocker to Encrypt Removable Media (Section 3)
Date: 2012-01-30
This article explains how you can store BitLocker Recovery keys in the Active Directory database.
In my previous article, I talked about how to regulate the way in which BitLocker is used in your organization through the use of group policy settings. As I alluded to towards the end of that article though, one of the big problems with encrypted media is the potential for data loss.
As you know, BitLocker encrypted drives are protected by a password. The problem is that users are prone to forget passwords, and in doing so they could end up permanently locking themselves out of the encrypted drive. Even though the data on the drive is still present, data loss still effectively occurs because the data remains inaccessible to the user. If you really stop and think about it, encrypted data that cannot be decrypted is really no different than corrupt data.
If you think back to the first article in this series, you will recall that when you encrypt a drive with BitLocker, Windows displays the message "How do you want to store your recovery key?", telling you that in the event the password is forgotten, a recovery key can be used to access the drive. Not only does Windows automatically provide you with this recovery key, it forces you to either print the recovery key or save it to a file.
Having a recovery key to fall back on is a good idea, but in the real world it is just not practical. How many users do you think will even remember that a recovery key exists, much less where they put the printout? The loss of an encryption key can have catastrophic consequences in a corporate environment where data is often irreplaceable. Thankfully, you do not have to depend on end users to keep track of their recovery keys; you can store the recovery keys in the Active Directory instead.
Preparing the Active Directory
Before we can configure BitLocker to store recovery keys in the Active Directory, we need to do a bit of prep work. As I’m sure you already know, BitLocker to Go was first introduced with Windows 7 and Windows Server 2008 R2. As such, it stands to reason that if you want to support BitLocker to Go key recovery at the Active Directory level, then you are going to need to run some of the Windows Server 2008 R2 code on your domain controllers.
Believe it or not, you do not have to upgrade all your domain controllers to Windows Server 2008 R2, unless you just want to. Instead, you can simply use a Windows Server 2008 R2 installation DVD to extend the Active Directory schema on the domain controller that is acting as the schema master for your Active Directory forest.
Before I show you how to extend your Active Directory schema, I need to warn you that this procedure assumes that all of your domain controllers are running Windows 2000 Server SP4 or above. If you have older domain controllers, then they must be upgraded before you will be able to perform the necessary schema extensions.
You should also perform a full system state backup of your domain controllers prior to extending the Active Directory schema. If something should go wrong during the extension process, it could have devastating effects on the Active Directory, so it is important to have a good backup that you can fall back on.
You can extend the Active Directory schema by inserting your Windows Server 2008 R2 installation DVD into your schema master. After doing so, open a Command Prompt window using the Run As Administrator option, and enter the following command (where D: represents the drive containing your installation media):

D:\support\adprep\adprep.exe /forestprep
When the ADPrep utility loads, you will be asked to confirm that your domain controllers are all running the appropriate versions of Windows Server. Simply press the C key and then press Enter to start the schema extension process. The entire schema extension should only take a couple of minutes to complete.
Configuring Group Policies
Simply extending the Active Directory schema alone does not force BitLocker to store recovery keys in the Active Directory. For that we are going to have to configure a few group policy settings.
Begin the process by loading the group policy that applies to your workstations into the Group Policy Management Editor. Now, navigate through the console tree to Computer Configuration | Policies | Administrative Templates: Policy Definitions | Windows Components | BitLocker Drive Encryption | Removable Data Drives.
At this point, you should enable the Deny Write Access to Removable Drives Not Protected by BitLocker setting. This isn’t an absolute requirement, but it does give you a way of forcing users to encrypt their USB flash drives. If you are going to force users to use BitLocker encryption, then you may also want to select the Do Not Allow Write Access to Devices Configured in Another Organization setting. Again, this isn’t a requirement, but it does help to improve security.
The next step in the process is to enable the Choose How BitLocker Removable Drives Can Be Recovered setting. When you double-click on this setting, a dialog box is displayed containing a series of check boxes that can be selected once the group policy setting is enabled.
If your goal is to save a copy of each recovery key in the Active Directory, then there are three of these options that you must enable. First, you must select the Allow Data Recovery Agent option. This option should be selected by default, but since this option is what makes the entire key recovery process possible, it is important to verify that the option is enabled.
Next, you will have to select the Save BitLocker Recovery Information to AD DS for Removable Data Drives. As you have probably already figured out, this is the option that actually saves the BitLocker recovery keys to the Active Directory.
Finally, you should select the Do Not Enable BitLocker Until Recovery Information Is Stored To AD DS For Removable Data Drives option. This option forces Windows to confirm that the recovery information has been written to the Active Directory before BitLocker is allowed to encrypt the drive. That way, you do not have to worry about a power failure wiping out the recovery key halfway through the encryption process.
Although not a requirement, some administrators also like to enable the Omit Recovery Option From The BitLocker Setup Wizard option. This prevents users from saving or printing their own copies of the recovery key.
In this article, I have shown you how to configure the Active Directory to store BitLocker recovery keys for removable drives. In Part 4, I will show you how the recovery process works.
Related Links: