Wednesday, December 07, 2011

Understanding a Data Loss Prevention System for Deployment

First of all, please note that DLP is about risk reduction, not threat elimination. Before beginning a deployment, it is important to know what kinds of policies can be defined and what enforcement options are available. Later, the proper workflow needs to be in place to handle policy violations. While human resources and legal teams are rarely involved in a virus infection, they may be intimately involved when an employee tries to send a customer list to a competitor. Setting a good baseline early (knowing what data needs protection, the capabilities of the tools in place to protect it, and the workflow for handling incidents) will actually ease the DLP implementation in any organization. The short write-up below will help Security Council members understand what DLP is and what industry best practices different organizations have adopted for DLP implementation.
Data protection efforts in organizations are now shifting toward internal users (vendors and employees), whereas earlier the focus was on external threats. Around 80% of reported breaches are caused by internal users, whether accidental or intentional, and only 20% by external forces. Companies are therefore keen to make the right investment decisions and protect against potential breaches caused by their own employees. There are many avenues through which confidential data or proprietary secrets can leave an organization via the Internet:

• Email
• Webmail
• HTTP (message boards, blogs and other websites)
• Instant Messaging
• Peer-to-peer sites and sessions
• FTP

Current firewall and other network security solutions do not include data loss prevention capabilities to secure data in motion. Missing are such important controls as content scanning, blocking of communications containing sensitive data and encryption. While companies have attempted to address the data loss problem through corporate policies and employee education, without appropriate controls in place, employees can (either through ignorance or malicious disregard) still leak confidential company information.


If we categorize all the avenues available to employees today to electronically expose sensitive data, the scope of the data loss problem is an order of magnitude greater than threat protection from outsiders. Consider the extent of the effort required to cover all the loss vectors an organization can potentially encounter:

• Data in motion – Any data that is moving through the network to the outside via the Internet.
(Solutions provided by: Websense, Symantec and CA)
• Data at rest – Data that resides in file systems, databases and other storage methods.
(Solutions provided by: IronPort, CA, Vontu)
• Data at the endpoint – Data at the endpoints of the network (e.g. data on USB devices, external drives, MP3 players, laptops, and other highly mobile devices). (Solutions provided by: Digital Guardian, Sophos)

There are two different types of Content Aware DLP solutions available:

1. Single-channel solutions – Focus on one data loss channel we want to address, such as email or the Web.
2. Enterprise DLP solutions – Involve lengthy implementations and big budgets. They can also be very disruptive to the organization but deliver much more coverage.

DLP products ship with hundreds of pre-defined policies. PortAuthority by Websense boasts over 140 pre-defined templates for major regulatory statutes. These policies contain rules for everything from identification of social security numbers to US regulatory laws; the very popular ones are HIPAA, Sarbanes-Oxley, GLBA, etc. In addition, vendors are even willing to create a custom policy based on customer requirements and the business model of a particular customer. Default policies can be fine-tuned to suit our needs, and matching gets even more accurate when data matching is applied against context. For example, if a payroll employee is observed viewing someone else's remuneration package, this event is normal behavior and can be ignored. However, if the same event were to occur from another department, the DLP should raise a flag and the event should be escalated. One key point to note in writing a signature is the tradeoff between false positives and false negatives; some vendors call them wide and narrow rules. If the matching occurs at a broader scope, it can result in a high number of false positives. On the other hand, we run the risk of not catching a true positive if we keep the rules too narrow. This is a business decision an organization should make based on the sensitivity level of the content versus the resources allocated for remediation. After all, customers do not want to end up in a situation similar to what they face with IDS.
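To make the wide-versus-narrow tradeoff concrete, here is a minimal sketch of two content rules for US social security numbers. This is illustrative Python, not any vendor's actual policy engine; the patterns and the keyword check are assumptions you would tune for your own data.

import re

WIDE_SSN = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")   # separators optional: catches more, misfires more
NARROW_SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # requires the familiar 3-2-4 grouping
CONTEXT = re.compile(r"\b(ssn|social security)\b", re.I)   # a nearby keyword narrows the rule further

def check(text):
    """Return (wide_hits, narrow_hits) for a piece of outbound content."""
    wide = WIDE_SSN.findall(text)
    narrow = [m for m in NARROW_SSN.findall(text) if CONTEXT.search(text)]
    return wide, narrow

sample = "Employee SSN 123-45-6789, shipment tracking 987654321"
print(check(sample))
# The wide rule flags both numbers (the tracking number is a false positive);
# the narrow rule reports only the dash-formatted value that sits near an SSN keyword.

The same structure scales to any pattern: widen the expression and you spend more on remediation; narrow it and you accept more missed leaks.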

While making a DLP investment, I would advise that the following points be considered.

1. Ensure Effective, Comprehensive Coverage:
Overall, a DLP solution must be able to effectively and comprehensively detect attempted policy violations. This includes:
• Multi-protocol monitoring and prevention
• Content-level analysis of all major file and attachment types
• Selective blocking and/or quarantining of messages
• Automatic enforcement of corporate encryption policies.

2. Make the Solution Unobtrusive:
The next important aspect for a DLP solution is that it’s non-intrusive. Overcoming the challenges of maintaining effective communications (while ensuring management and control of customer and sensitive information) requires both well thought out policies, and processes for monitoring communications content.

3. Provide Workflow, Administration and Reporting Capabilities:
To help keep total cost of ownership low, the selected product should be simple and fast to implement effectively within the organization’s infrastructure – leveraging plug-and-play capabilities to minimize integration requirements. Robust reporting capabilities allow policy officers to readily access information to:
• Analyze and improve the organization’s DLP capabilities
• Automatically deliver decision-making information in a timely manner
• Easily generate instant reports for executives.

4. Prefer a combination of network and endpoint coverage that works in a heterogeneous environment.








Friday, November 18, 2011

Design the Right Firewall Topology for Your Network

With network security becoming such a hot topic, you may have come under the microscope about your firewall and network security configuration. You may have even been assigned to implement or reassess a firewall design. In either case, you need to be familiar with the most common firewall configurations and how they can increase security. In this article, I will introduce you to some common firewall configurations and some best practices for designing a secure network topology.

Setting up a firewall security strategy
At its most basic level, a firewall is some sort of hardware or software that filters traffic between your company’s network and the Internet. With the large number of hackers roaming the Internet today and the ease of downloading hacking tools, every network should have a security policy that includes a firewall design.

 If your manager is pressuring you to make sure that you have a strong firewall in place and to generally beef up network security, what is your next move? Your strategy should be twofold:

  • Examine your network and take account of existing security mechanisms (routers with access lists, intrusion detection, etc.) as part of a firewall and security plan.
  • Make sure that you have a dedicated firewall solution by purchasing new equipment and/or software or upgrading your current systems.
Keep in mind that a good firewall topology involves more than simply filtering network traffic. It should include:
·   A solid security policy.
·   Traffic checkpoints.
·   Activity logging.
·   Limiting exposure to your internal network.

Before purchasing or upgrading your dedicated firewall, you should have a solid security policy in place. A firewall will enforce your security policy, and by having it documented, there will be fewer questions when configuring your firewall to reflect that policy. Any changes made to the firewall should be amended in the security policy.

            One of the best features of a well-designed firewall is the ability to funnel traffic through checkpoints. When you configure your firewall to force traffic (outbound and inbound) through specific points in your firewall, you can easily monitor your logs for normal and suspicious activity.

How do you monitor your firewall once you have a security policy and checkpoints configured? By using alarms and enabling logging on your firewall, you can easily monitor all authorized and unauthorized access to your network. You can even purchase third-party utilities to help filter out the messages you don't need.

It's also a good practice to hide your internal network address scheme from the outside world. It is never wise to let the outside world know the layout of your network.

Firewall terminology
Before we look at specific firewall designs, let's run through some basic firewall terminology you should become familiar with:

·        Gateway—A gateway is usually a computer that acts as a connector from a private network to another network, usually the Internet or a WAN link. A firewall gateway can transmit information from the internal network to the Internet, in addition to defining what should and should not be able to pass between the internal network and the Internet.
·        Network Address Translation (NAT)—NAT hides the internal addresses from the external network (Internet) or outside world. If your firewall is using NAT, all internal addresses are translated to public IP addresses when leaving the internal network, thus concealing their original identity.
·        Proxy servers—A proxy server replaces the network's IP address and effectively hides the actual IP address from the rest of the Internet. Examples of proxy servers include Web proxies, circuit level gateways, and application level gateways.
·        Packet filtering firewall—This is a simple firewall solution that is usually implemented on routers that filter packets. The headers of network packets are inspected when going through the firewall. Depending on your rules, the packet is either accepted or denied. Because most routers can filter packets, this is an easy way to quickly configure firewall rules to accept or deny packets. However, it's difficult for a packet filtering firewall to differentiate between a benign packet and a malicious packet.
·        Screening routers—This is a packet filtering router that contains two network interface cards. The router connects two networks and performs packet filtering to control traffic between the networks. Security administrators configure rules to define how packet filtering is done. This type of router is also known as an outside router or border router.
·        Application level gateway—This type of gateway allows the network administrator to configure a more complex policy than a packet filtering router. It uses a specialized program for each type of application or service that needs to pass through the firewall.
·        Bastion host—A bastion host is a secured computer that allows an untrusted network (such as the Internet) access to a trusted network (your internal network). It is typically placed between the two networks and is often referred to as an application level gateway.
·        Demilitarized zone (DMZ)—A DMZ sits between your internal network and the outside world, and it's the best place to put your public servers. Examples of systems to place on a DMZ include Web servers and FTP servers.
Now that we have gone over some of the basics, it is time to discuss common firewall designs.
Screening router
A screening router is one of the simplest firewall strategies to implement. This is a popular design because most companies already have the hardware in place to implement it. A screening router is an excellent first line of defense in the creation of your firewall strategy. It's just a router that has filters associated with it to screen outbound and inbound traffic based on IP address and UDP and TCP ports. 
 
          If you decide to implement this strategy, you should have a good understanding of TCP/IP and how to create filters correctly on your router(s). Failure to implement this strategy properly can result in dangerous traffic passing through your filters and onto your private LAN. If this is your only device, and a hacker is able to pass through it, he or she will have free rein. It's also important to note that this type of configuration doesn't hide your internal network IP addresses and typically has poor monitoring and logging capabilities.

            If you have little or no money to spend and need a firewall configuration quickly, this method will cost you the least amount of money and will let you use existing routers. It's an excellent start to your firewall strategy and is a good device to use on networks that use other security tools as well.
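The decision logic inside a screening router boils down to an ordered rule list evaluated top-down with a default deny. The sketch below is illustrative Python, not actual router configuration syntax, and the addresses and ports are made-up documentation examples, but it shows how first-match filtering on source, destination, protocol and port works.

import ipaddress

# Each rule: (source prefix, destination prefix, protocol, destination port, action).
# "*" is a wildcard; the first matching rule wins and anything unmatched is denied.
RULES = [
    ("*",          "192.0.2.10/32", "tcp", 80, "permit"),  # inbound web traffic to the public server
    ("10.0.0.0/8", "*",             "tcp", 25, "permit"),  # internal hosts may send mail out
]

def matches(value, pattern):
    if pattern == "*":
        return True
    if "/" in str(pattern):
        return ipaddress.ip_address(value) in ipaddress.ip_network(pattern)
    return value == pattern

def screen(src, dst, proto, dport):
    for s, d, p, port, action in RULES:
        if matches(src, s) and matches(dst, d) and matches(proto, p) and matches(dport, port):
            return action
    return "deny"  # default deny, like the implicit deny at the end of an access list

print(screen("198.51.100.7", "192.0.2.10", "tcp", 80))  # permit
print(screen("198.51.100.7", "10.1.1.5", "tcp", 23))    # deny: no rule lets telnet into the LAN

On a real router the same logic is expressed as access lists bound to interfaces, but the first-match, default-deny behavior is the part worth internalizing before you write filters.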

Screened host firewalls
           
A screened host firewall configuration uses a single homed bastion host in addition to a screening router. This design uses packet filtering and the bastion host as security mechanisms and incorporates both network- and application-level security. The router performs the packet filtering, and the bastion host performs the application-side security. This is a solid design, and a hacker must penetrate the router and the bastion host to compromise your internal network.
            Also, by using this configuration as an application gateway (proxy server), you can hide your internal network configuration by using NAT.

            The above design configures all incoming and outgoing information to be passed through the bastion host. When information hits the screening router, the screening router filters all data through the bastion host prior to the information passing to the internal network.
            You can go one step further by creating a dual-homed bastion host firewall. This configuration has two network interfaces and is secure because it creates a complete physical break in your network.

Demilitarized zone (DMZ) topology
A DMZ is one of the most common and most secure firewall topologies. It is often referred to as a screened subnet. A DMZ creates a secure space between the Internet and your internal network.

A DMZ will typically contain the following:
·         Web server
·         Mail server
·         Application gateway
·         E-commerce systems (It should contain only your front-end systems. Your back-end systems should be on your internal network.)


            A DMZ is considered very secure because it supports network- and application-level security in addition to providing a secure place to host your public servers. A bastion host (proxy), modem pools, and all public servers are placed in the DMZ.


            Furthermore, the outside firewall protects against external attacks and manages all Internet access to the DMZ. The inside firewall manages DMZ access to the internal network and provides a second line of defense if the external firewall is compromised. In addition, LAN traffic to the Internet is managed by the inside firewall and the bastion host on the DMZ. With this type of configuration, a hacker must compromise three separate areas (external firewall, internal firewall, and the bastion host) to fully obtain access to your LAN.

            Many companies take it one step further by also adding an intrusion detection system (IDS) to their DMZ. By adding an IDS, you can quickly monitor problems before they escalate into major problems.

Defense in Depth: The operations factor

The first two pillars of the three-tiered Defense in Depth concept of network security are people and technology. The final pillar of Defense in Depth is operations. This phase of your security model is where policy meets reality.
Logic dictates that something within your network will break down, whether due to human error, malicious behavior, or hardware failure. Your operations plan must be sufficient to cope with these threats; to do so, it must meet five basic criteria: the plan must be comprehensively documented, widely supported, and it must reflect your current operations while allowing for both growth and the possibility of disaster.

Thursday, September 15, 2011

Different types of cyber crime and their investigation

1. Hacking
Hacking in simple terms means illegal intrusion into a computer system without the permission of the computer owner/user.

Evidences to be secured: HDD, Mails with Header, Network Logs
Investigation Mechanism: IP Tracing, Location Tracing, MAC Address Verification and Fingerprinting

2. Phishing
It is a technique of extracting confidential information from bank/financial institution account holders by deceptive means.

Evidences to be secured: Mails with Header, Network Logs.
Investigation Mechanism: Mail Tracing, IP Tracing, Location Tracing.
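Mail tracing starts with the Received headers of the suspect message, read from the origin upward. Here is a minimal sketch, assuming the message has been exported as a raw .eml text file (the filename is just a placeholder):

import email
import re

def received_ips(raw_message):
    """Return the IP addresses found in each Received header, earliest hop first."""
    msg = email.message_from_string(raw_message)
    hops = msg.get_all("Received") or []
    ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    # Each relay prepends its own Received header, so reverse the list to read origin -> destination.
    return [ip_pattern.findall(hop) for hop in reversed(hops)]

with open("suspect_phish.eml") as f:           # placeholder filename for the exported raw message
    for hop, ips in enumerate(received_ips(f.read()), start=1):
        print("hop %d: %s" % (hop, ips))

The IP of the earliest external hop is then what feeds the WHOIS and geolocation lookups used for IP tracing and location tracing.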

3. Credit Card Fraud
For an online transaction you simply type the credit card number into the vendor's web page. If electronic transactions are not secured, the credit card numbers can be stolen by hackers, who can misuse the card by impersonating the credit card owner.

Evidence to be secured: Transaction logs, network logs, product delivery address, sometime HDD.
Investigation Mechanism: Transaction Verification, IP Tracing, Location Tracing, Mac Address Verification and finger printing, log Analysis.

4. Denial of Service (SPAMMING)
This is an act by the criminal, who floods the bandwidth of the victim’s network or fills his e-mail box with spam mail depriving him of the services he is entitled to access or provide.

Evidences to be secured: Mails with Header, Network Logs.
Investigation Mechanism: Mail Tracing, IP Tracing, Location Tracing, Log Analysis.

5. VIRUS Dissemination
Malicious software that attaches itself to other software.
(Viruses, worms, Trojan horses, time bombs, logic bombs, rabbits and bacteria are the types of malicious software.)

Evidences to be secured: HDD, Mails with Header, Network Logs.
Investigation Mechanism: Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

6. Software Piracy
Theft of software through the illegal copying of genuine programs, or the counterfeiting and distribution of products intended to pass for the original.
• Retail revenue losses worldwide are ever increasing due to this crime.
• It can be done in various ways: end-user copying, hard disk loading, counterfeiting, illegal downloads from the internet, etc.

Evidences to be secured: HDD, Mails with Header, Network Logs
Investigation Mechanism: Mail Tracing, Log Analysis, Finger printing.

7. Pornography
Pornography is the first consistently successful e-commerce product.
• Deceptive marketing tactics and mouse-trapping technologies encourage customers to access these websites.
• Anybody, including children, can log on to the internet and access websites with pornographic content with a click of a mouse.
• Publishing or transmitting any material in electronic form which is lascivious or appeals to the prurient interest is an offence under the provisions of Section 67 of the I.T. Act, 2000.

Evidences to be secured: HDD, Network Logs
Investigation Mechanism: IP Tracing, Location Tracing, Log Analysis, Finger printing.

8. Paedophiles (The Slaughter of Innocence)
Paedophilia, or sexual attraction to children by an adult, is a sickness that does not discriminate by race, class, or age. The Internet gives paedophiles:

1. Instant access to other predators worldwide;
2. Open discussion of their sexual desires and ways to lure victims;
3. Mutual support of their adult-child sex philosophies;
4. Instant access to potential child victims worldwide;
5. Disguised identities for approaching children, even to the point of presenting as a member of teen groups;
6. Ready access to "teen chat rooms" to find potential victims and learn how to target them;
7. Shared ideas about ways to identify and track down victims' home contact information;
8. The ability to build a long-term "Internet" relationship with a potential victim, prior to attempting to engage the child in physical contact.

Evidences to be secured: HDD, Mails with Header, Network Logs
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

9. IRC Crime
Internet Relay Chat (IRC) servers have chat rooms in which people from anywhere in the world can come together and chat with each other.
• Criminals use it for meeting co-conspirators.
• Hackers use it for discussing their exploits and sharing techniques.
• Paedophiles use chat rooms to lure small children.
• Cyber stalking – in order to harass a woman, her telephone number is given to others as if she wants to befriend males.
Evidences to be secured: HDD, Network Logs
Investigation Mechanism: Identity Verification, IP Tracing, Location Tracing, Log Analysis, Finger printing.

10. NET Extortion
Copying the company's confidential data in order to extort a huge amount from that company.

Evidences to be secured: HDD, Mails with Header, Network Logs
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

11. Spoofing ( SMS Spoofing)
Getting one computer on a network to pretend to have the identity of another computer, usually one with special access privileges, so as to obtain access to the other computers on the network.

Evidences to be secured: Mails with Header, Network Logs
Investigation Mechanism: Mail Tracing, IP Tracing, Location Tracing, Log Analysis.

12. Threatening
Threats that create fear, made by using computers. Most of the time these threats are delivered via email, blogs and social network posts.

Evidences to be secured: HDD, Mails with Header, Network Logs
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

13. Identity Theft
Identity theft is the fastest growing crime in the U.S., with over nine million victims each year. Just being careful isn't enough to protect your identity. Identity theft occurs when someone uses your personal information, such as your Social Security number, name or credit card number, without your permission, to commit fraud or other crimes. A thief could take out a mortgage in your name or commit a crime and pretend to be you when caught. Thieves can even use your personal information to apply for a job or use your medical insurance! (Information Technology Act 2000, Chapter IX, Sec 43(b))

Evidences to be secured: HDD, Mails with Header, Network Logs
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

14. Carding
Carding is a term used for a process to verify the validity of stolen card data. The thief presents the card information on a website that has real-time transaction processing. If the card is processed successfully, the thief knows that the card is still good. The specific item purchased is immaterial, and the thief does not need to purchase an actual product; a Web site subscription or charitable donation would be sufficient. The purchase is usually for a small monetary amount, both to avoid using the card's credit limit, and also to avoid attracting the card issuer's attention. A website known to be susceptible to carding is known as a cardable website.
In the past, carders used computer programs called "generators" to produce a sequence of credit card numbers and then test them to see which were valid accounts. Another variation would be to take false card numbers to a location that does not immediately process card numbers, such as a trade show or special event.

Evidence to be secured: Transaction logs, network logs, product delivery address, sometime HDD.
Investigation Mechanism: Transaction Verification, Identity Verification, IP Tracing, Location Tracing, Mac Address Verification and finger printing, log Analysis.

15. Cracking
Software cracking is the modification of software to remove or disable features which are considered undesirable by the person cracking the software, usually related to protection methods: copy protection, trial/demo version, serial number, hardware key, date checks, CD check or software annoyances like nag screens and adware.

Password cracking is the process of recovering passwords from data that has been stored in or transmitted by a computer system. A common approach is to repeatedly try guesses for the password. The purpose of password cracking might be to help a user recover a forgotten password (though installing an entirely new password is less of a security risk, but involves system administration privileges), to gain unauthorized access to a system, or as a preventive measure by system administrators to check for easily crackable passwords. On a file-by-file basis, password cracking is utilized to gain access to digital evidence for which a judge has allowed access but the particular file's access is restricted.

Evidences to be secured: HDD, Applications and Systems Logs
Investigation Mechanism: IP Tracing, Location Tracing, MAC Address Verification and Fingerprinting

16. Salami Attack
In such crime criminal makes insignificant changes in such a manner that such changes would go unnoticed. Criminal makes such program that deducts small amount like Rs. 2.50 per month from the
Account of all the customer of the Bank and deposit the same in his account. In this case no account holder will approach the bank for such small amount but criminal gains huge amount.

Evidences to be secured: Transaction Logs
Investigation Mechanism: Transaction Verification, Identity Verification, IP Tracing, Location Tracing, Mac Address Verification and finger printing, log Analysis.

17. Phreakers
Phreaking is a slang term coined to describe the activity of a culture of people who study, experiment with, or explore telecommunication systems, such as equipment and systems connected to public telephone networks. As telephone networks have become computerized, phreaking has become closely linked with computer hacking.[1] This is sometimes called the H/P culture (with H standing for hacking and P standing for phreaking).

The term phreak is a portmanteau of the words phone and freak, and may also refer to the use of various audio frequencies to manipulate a phone system. Phreak, phreaker, or phone phreak are names used for and by individuals who participate in phreaking. A large percentage of the phone Phreaks were blind.[2][3] Because identities were usually masked, an exact percentage cannot be calculated.
http://www.theregister.co.uk/2007/03/22/voip_fraud/

18. IP Infringement
An intellectual property infringement is the infringement or violation of an intellectual property right. There are several types of intellectual property rights, such as copyrights, patents, and trademarks. Therefore, an intellectual property infringement may, for instance, be a:
• Copyright infringement
• Patent infringement
• Trademark infringement
Techniques to detect (or deter) intellectual property infringement include fictitious entries, such as:
• A fictitious dictionary entry. An example is "esquivalience", included in the New Oxford American Dictionary.
• A trap street, a fictitious street included on a map for the purpose of "trapping" potential copyright violators of the map.
Evidences to be secured: HDD, Mails with Header

19. H/W S/W Sabotage
Sabotage is a deliberate action aimed at weakening another entity through subversion, obstruction, disruption, or destruction.
Evidences to be secured: Pictures of the damaged hardware, and the HDD for software sabotage

20. Cyber Terrorism
Premeditated, usually politically-motivated violence committed against civilians through the use of, or with the help of, computer technology.

Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.


21. Cyber Vandalism
Damaging or destroying data rather than stealing or misusing them (as with cyber theft) is called cyber vandalism. This can include a situation where network services are disrupted or stopped.

Evidences to be secured: HDD, Mails with Header, Web Page Dumps of Social Network Sites
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.


22. Cyber Contraband
Transferring illegal items through the internet (such as encryption technology) that are banned in some locations.
• Sale of narcotics
• Sale and purchase through the net
• There are websites which offer sale and shipment of contraband drugs.
• They may use steganography techniques for hiding the messages.
• Sale of encryption software and hardware that is not permitted

23. Cyber Laundering
Electronic transfer of illegally obtained monies with the goal of hiding their source and possibly their destination.
Evidences to be secured: Transaction Logs, Applications Logs, Mails with Header, Network Logs

24. Cyber Stalking
Cyberstalking is the use of the Internet or other electronic means to stalk or harass an individual, a group of individuals, or an organization. It may include false accusations, monitoring, making threats, identity theft and damage to data or equipment, the solicitation of minors for sex, or gathering information in order to harass.
Evidences to be secured: HDD, Applications and Systems Logs

25. CYBER Defamation
The criminal sends emails containing defamatory matter to everyone associated with the victim, or posts the defamatory matter on a website. (A disgruntled employee may do this against a boss, an ex-boyfriend against a girl, a divorced husband against his wife, etc.)

Evidences to be secured: HDD, Mails with Header, Web Page Dumps of Social Network Sites
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

26. Cyber Squatting
The misleading use of trademarks in Internet domain names: using a domain name with bad-faith intent to profit from the goodwill of a trademark belonging to someone else. The cybersquatter then offers to sell the domain to the person or company who owns the trademark contained within the name, at an inflated price.

Evidences to be secured: HDD, Mails with Header, Web Page Dumps of Social Network Sites
Investigation Mechanism: Identity Verification, Mail Tracing, IP Tracing, Location Tracing, Log Analysis, Finger printing.

27. Pay-Per-Click Click Fraud
Clicking on pay-per-click advertisements, manually or with automated scripts, to generate fraudulent charges for an advertiser or fraudulent revenue for the site hosting the ads, with no genuine interest in the advertised product.


28. Pump & dump schemes

"Pump and dump" schemes, also known as "hype and dump manipulation," involve the touting of a company's stock (typically microcap companies) through false and misleading statements to the marketplace. After pumping the stock, fraudsters make huge profits by selling their cheap stock into the market.

Pump and dump schemes often occur on the Internet where it is common to see messages posted that urge readers to buy a stock quickly or to sell before the price goes down, or a telemarketer will call using the same sort of pitch.

References:-
1. Mumbai Police Cyber Crime Awareness Program.
2. http://www.slideshare.net/sanjay_jhaa/cyber-crimeppt-1
3. http://www.redorbit.com/news/technology/2021986/cybersquatting_activity_jumped_28_percent_in_2010/index.html
4. http://www.wipo.int/pressroom/en/articles/2011/article_0010.html

Sunday, July 24, 2011

Syslog and Auditing Utilities

Recently, during an audit, we identified very few system events in the centralised syslog server. Investigating further, we found that the syslog configuration was correct on all 100 servers. So what was wrong? We were getting only the syslog daemon's own messages, not the login events of anything else. The basics were wrong: although log forwarding was configured, what should be logged was never defined. Enclosed are the recommended settings to log all login events, specifically for Solaris.
Configure syslog messages by increasing the logging severity level for the login daemons
-          vi the /etc/syslog.conf file to raise the logging level for the auth facility, i.e. change it to auth.notice to log each system login authentication
-          Manually stop & start syslogd to set changes:
-          /etc/init.d/syslogd stop
-          /etc/init.d/syslogd start
-     Also, ensure the /etc/default/login file has the entry SYSLOG=YES to log all root logins and attempts.

Configure syslog messages by increasing the logging severity level for the telnet daemons
-          vi the /etc/syslog.conf file to raise the logging level for the daemon facility, i.e. change it to daemon.notice to log each system service (e.g. ftp, telnet, etc.)
-          vi the file /etc/init.d/inetsvc to add trace mode (-t) for inetd, line should read:
-          /usr/sbin/inetd -s -t &
-          Manually stop & start syslogd to set changes:
-          /etc/init.d/syslogd stop
-          /etc/init.d/syslogd start
-          Must manually stop & start inetsvc
-          /etc/init.d/inetsvc stop
-          /etc/init.d/inetsvc start
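For reference, the corresponding /etc/syslog.conf entries would look something like the lines below. On Solaris the selector and the action must be separated by tabs, not spaces, and "loghost" is only a placeholder for your central syslog server's hostname or a matching /etc/hosts alias.

auth.notice          @loghost
daemon.notice        @loghost

After editing the file, restart syslogd as shown above so the new selectors take effect, then test with a fresh login and confirm the event arrives on the central server.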

Sunday, June 26, 2011

VM Sprawl a burning issue !

Server virtualization has been extremely successful. It has reduced physical server counts, increased business flexibility and made DR planning simpler. But server virtualization has also brought its own set of challenges, one of which is virtual machine (VM) sprawl. VM sprawl is the ‘weed-like’ growth in VMs that, similar to ‘NT server sprawl’ a decade ago, has become a management problem for IT administrators everywhere.

VM sprawl is a burning issue across the virtual world. People who are yet to implement virtualization, or who are still in the planning phase, may not have come across this buzzword, but friends of mine who are already sailing in the virtualization boat are worried about it. The entire TCO and ROI case for virtualization can prove to be a failure within just a 2-3 year time frame. The maximum gains and savings from an infrastructure investment are supposed to come back after 5 years, but after just 3 years VM sprawl starts draining a hefty amount from your budget, much like petrol prices.
Now, some of you will want to know what VM sprawl is. So there you go.

The web definition goes like this:
The number of virtual machines (VM) running in a virtualized infrastructure increases over time, simply because of the ease of creating new VMs, not because those VMs are absolutely necessary for the business. Concerns with VM sprawl are the overuse of the infrastructure if it is not needed and the cost of licenses for virtual machines that may not have been required.
Because of this ease of deployment virtual servers are routinely ‘stood up’, almost as soon as they’re requested. It seems that this happens without much thought being given to how important the application is or the length of time the VMs need to be deployed. There are cases of VM growth rates approaching 125% per year, with the majority of those VMs being servers that never existed before the switch to virtualization.

Definition for tech geeks
When the number of virtual machines (VMs) on the network reaches the point where an administrator can't manage them effectively -- or where the VMs start demanding excessive host resources -- that means there is virtual sprawl.
Virtualization sprawl, or VM sprawl, is defined as a large number of virtual machines on your network without proper IT management or control. For example, multiple departments that own servers may begin creating virtual machines without proper procedures or control over the release of these virtual machines.
Now, if you let months go by, bottlenecks begin to appear on servers, or crashes occur because system resources are low. The research then begins by the IT department, and they begin to understand the nightmare that has become their reality.
Hypothetically, a company that had 10 physical servers one year ago might have dropped that number down to eight with virtualization. But today, that company might now have 25 VMs running on those eight servers. The number of physical servers the company needs to manage has dropped by 20%, but the number of operating system instances has increased by 150%!
The reason for this growth is simple: engineers and users have gotten used to the ease with which they can deploy a virtual machine. Application users continually ask for their own server, and with VMware, engineers can easily accommodate those requests. Savvy users realize how easy it is to get dedicated server space, so the number of VMs keeps increasing.

What is the cost of an orphaned VM?
The truth is that orphaned VMs are not really idle. They're still consuming memory and CPU cycles and burdening the hypervisor, which must continually check whether the VM needs additional resources. They also consume disk resources, which can be quite substantial thanks to the practice of using templates to simplify set-up: most administrators set a "safe" file size in their VM templates to make sure there's always enough disk capacity. It's very likely that idle VMs are tying up terabytes of excess disk space in a typical environment. Most server virtualization environments have made the extra investment and deployed shared storage for VM flexibility, which means this wasted capacity comes at a premium price.
Orphaned VMs also unnecessarily add to the cost and complexity of data protection processes. The data associated with these orphaned systems is often included as part of a default replication strategy, which takes disk space at a DR site. These orphaned VMs also consume backup resources, as they're saved when full backups are executed and examined during each incremental backup to confirm that no changes have occurred since the last backup. Orphaned VMs can also have an impact on the performance of other VMs on the same server, so it is critical that administrators keep track of all the VMs sharing and drawing on the same resources.
Compared to a physical server, the effort required to identify, turn off and archive an orphaned virtual machine is minimal. Physical machines need to be powered off, de-racked and physically stored or securely discarded. If suddenly an application needs to be regenerated the physical deployment has to occur all over again. A virtual system can be turned off and the virtual machine image can be archived to less expensive storage. Returning to operation requires only a few clicks and the time to do a disk to disk transfer.
If you are in the implementation or planning phase, five basic questions should be answered before you take any decision; those questions are very well listed at http://searchservervirtualization.techtarget.com/Control-VM-sprawl-in-your-virtual-server-infrastructure. There are also various ways to control the VM sprawl issue; some of them are enclosed for ease of reference.
Identifying orphaned VMs should be the first step in getting VM sprawl under control. These are server instances that had been set up for a specific purpose, but outlived their usefulness quickly and so were abandoned. For example, a request may come in for a VM to test a new version of an application. The server is only needed for about 30 days, but after the testing is done, it just sits idly - another orphaned server, with no task to perform.

Reactive measures
In its simplest form, reactive resolution of VM sprawl can start with general housekeeping. This won't require you to purchase a product, as Virtual Center can quite easily accomplish and target reductions if needed. For example, some VMs might not be registered on ESX hosts; some might have been replicated or spun off to a clone due to operational issues when the app team or ISV deployed the original VM. You may also find that the VMDKs presented to your VMs are well under-filled, so they can be shrunk to regain space.
On the consumed storage issue, vSphere 4 introduces added functionality that will help reduce this in future; any recommendations here are based on current releases. The main feature is thin provisioning of VMs, which lets VM usage grow without locking up what is effectively unusable whitespace inside your VMDKs.

Identification
The first step is to identify these virtual machines and archive them out of the environment, or at least turn them off. A monitoring tool like Vizioncore’s vFoglight can provide data on resource utilization, template efficiency and deployment strategies. These tools will monitor from a virtual machine view, a vCenter view or a data center view, essential to detecting virtual machines that are inactive. They will also allow the close monitoring of specific resources that can provide additional clues to identifying orphaned VMs. The ability to examine a VM over the course of time is critical. Low memory and CPU utilization for one night does not justify the decommissioning of a VM, but over the course of a few weeks, it likely does.
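The logic these monitoring tools apply can be boiled down to a few lines. The following is a generic sketch, not vFoglight or vCenter code; the inventory list, the utilisation figures and the thresholds are assumptions standing in for whatever reporting export you already have:

# Hypothetical weekly averages exported from a monitoring/reporting tool.
# Fields: VM name, average CPU %, average memory %, weeks of history available.
inventory = [
    ("web-prod-01", 42.0, 61.0, 6),
    ("apptest-old",  0.4,  2.1, 8),   # looks orphaned
    ("payroll-db",  18.0, 35.0, 6),
]

CPU_IDLE, MEM_IDLE, MIN_WEEKS = 2.0, 5.0, 4   # illustrative thresholds only

def orphan_candidates(vms):
    """Flag VMs that have stayed essentially idle for several consecutive weeks."""
    return [name for name, cpu, mem, weeks in vms
            if cpu < CPU_IDLE and mem < MEM_IDLE and weeks >= MIN_WEEKS]

print(orphan_candidates(inventory))   # ['apptest-old']

As the paragraph above notes, one quiet night proves nothing; the weeks-of-history condition is what makes the flag meaningful.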

Archive
Once the orphaned VMs are identified they can then be dealt with. For VMs that will likely see resumed use, simply tag and turn them off. This is a key advantage over physical systems. If there are applications in the environment that are only run quarterly, for example, it’s easy to turn them off and on as needed. Physical systems require physical interaction and typically are not used on an as-needed basis like this.
VMs that are deemed highly unlikely to be needed in the future can be archived to a secondary disk tier that's lower in cost per GB and more power efficient. Using tools like Vizioncore's vRanger, the archived VMs can be recalled with a few clicks. This provides the ability to free up all the disk resources discussed earlier and store the server in a secure state, in case there's a need to show chain of custody in a legal action.

Proactive planning and prevention
Every virtualized environment should have at least some kind of documented audit. If you do not have a CMDB, then in its simplest form an Excel spreadsheet provides a basic view of your virtual infrastructure and allocation. VirtualCenter has exportable reporting built in to help build even a simple spreadsheet; to see this in action within your VirtualCenter today, go to "VM and Template View", select the highest-level folder, then select "File > Export > Export List". Some VI admins may be quite clever with PowerShell scripts or SQL queries, but this is quick, easy and intuitive. You can also use this type of audit to help with capacity planning for your environment: it lets you monitor how much space you have left and perform simple "what if" analysis on how much disk, RAM and CPU resource you would have when adding a new machine that is being requested.

Control and Automation
Once the existing environment has been cleared of orphaned systems, the next step is to put procedures in place to keep VM sprawl from happening in the future. With products like Vizioncore's vControl and the public-domain scripting capabilities of VESI, the whole process can be automated. For example, more granular use of templates can be instituted: during creation, the administrator can be prompted for the required VM disk size to keep utilization efficient, and can also record a VM expiration date and the name of the VM requester. This information can be embedded into the notes section of the VM. A subsequent task could then check for expired VMs and email the requester for authorization to turn them off, and a final task could turn off all expired and confirmed VMs. Especially in server virtualization, this kind of broad automation is critical to enabling system administrators to increase the number of VMs they can manage.
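As a rough illustration of that expiry workflow, here is a small sketch. The notes format, field names, dates and owner addresses are all assumptions for the example, not a feature list of any particular product:

from datetime import datetime, date

# Assume each VM's notes field was filled in at creation time as "expires=YYYY-MM-DD;owner=email".
vm_notes = {
    "apptest-old": "expires=2011-03-31;owner=qa.lead@example.com",
    "web-prod-01": "expires=2012-12-31;owner=web.team@example.com",
}

def expired_vms(notes, today):
    flagged = []
    for name, note in notes.items():
        fields = dict(item.split("=", 1) for item in note.split(";"))
        if datetime.strptime(fields["expires"], "%Y-%m-%d").date() < today:
            flagged.append((name, fields["owner"]))  # next step: mail the owner, power off on confirmation
    return flagged

print(expired_vms(vm_notes, date(2011, 6, 1)))   # [('apptest-old', 'qa.lead@example.com')]

The point is less the code than the discipline: capture the expiry and the owner at creation time, and the clean-up can be a scheduled job instead of a yearly archaeology project.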
 Effective management of VM sprawl is enabled by having the right tools. Some of these capabilities exist within the server virtualization software, but need the help of automation tools to allow administrators to take full advantage of them. For exacting control however, third-party programs that can monitor and archive these virtual machines are required. Through this combination of internal utilities and external software tools, the ‘great VM sprawl challenge’ can be managed and the ROI on server virtualization projects increased.
Some of the good VM Sprawl Management Tools are listed at http://www.webmasters.org/a/vmware/vmware-sprawl/
VMware VirtualCenter will also, at some point this year, gain this functionality through a module called CapacityIQ, available from within the vCenter console; for more information see http://www.vmware.com/products/vcenter-capacityiq/. I've seen it in action and it's great: it provides out-of-the-box functionality that will certainly support what I've discussed in this post.
The ease with which VMs are created makes it that much easier for VMs to be launched and moved willy-nilly regardless of the security and software licensing cost issues, just to name two common problems. Vendors of course have been hip to these challenges. This month, Embotics Corp. released version 2.0 of its V-Commander management software designed to automatically nip virtual sprawl in the bud. One way the software does this is by automatically enforcing policy dictating such things as VM expiration dates and through role-based security access that defines just who can do what in terms of VM creation and migration.

The Rolls Royce solution
For larger, enterprise-sized virtual environments, keeping track of the constant growth in demand is impossible; to succeed, IT services ideally need to be self-service based, with the end user or customer able to request what they want through a web mechanism. It may sound stupid to give the end user control that could make the sprawl problem you are already experiencing even worse. To combat this, however, the SSP (self-service portal) can be provided with delegated privileges, pre-defined object creation controls, approval processes routed to higher-level management or project support offices, and proactive features such as what-if analysis and tombstoning of virtual machines. All policy applied within the technology is set by IT governance and defined according to business requirements within the tools.
Two example products which provide self service portals include;
These technologies currently see rather low uptake and adoption within organisations. There may be more products on the market, but with the example functionality in the products above we will certainly start to see more and more of them as IT departments struggle with the business's demands for infrastructure. I also predict that these technologies will come to be seen, as VMware was, as the killer app for recovering lost productivity within organisations and project teams.
The issue with these products today is that they carry medium to large price tags, which puts off the typical bean counter when business cases are put forward. So before building any proposal, do your research on the product and see where you feel it can cut current tedious, expensive business processes and VM sprawl and improve your budgeting, so that this can be equated to a measurable, deliverable ROI after deployment of the product.

Tuesday, May 17, 2011

Network-based Clientless DLP vs. Endpoint Agent-based DLP

Although my blog post on how to enable SSH on Cisco devices gets the most hits, most of the questions from readers have been about DLP.

In the era of the Swami Nityananda scandal and WikiLeaks, DLP solutions are attracting a lot of attention. People and tech gurus alike are confused about the solutions. The most common and interesting question I am getting from readers is which DLP solution is better: network-based clientless or agent-based.

Frankly speaking, both solutions have their own merits and demerits. It actually depends on what kind of information you are trying to protect. For example, in a BPO or call-center environment, credit card numbers and PAN numbers are critical, while in a pharma company or a research-based organization, the content you want to protect consists of diagrams and formulae.

For the first category, fingerprint and pattern scanning work better and can be deployed on the network without stressing endpoint resources. In the second case, fingerprints don't work well at all; also, the endpoints tend to have more memory and CPU, so an agent-based DLP deployment makes sense.

Network-based DLP eases the computing load, and if you know the kind of data patterns involved, it is a boon in a virtualised environment. But if you have a very sophisticated user base using advanced tools to do their work, no fixed pattern can be developed, and you want to restrict even print-screen captures and content in image format, then agent-based DLP is recommended.
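For the BPO and call-centre case above, a raw pattern match (any 13-16 digit run) throws up plenty of false positives; pairing it with the Luhn checksum that valid payment card numbers satisfy is the usual way to narrow the rule. A small generic sketch, not tied to any DLP product:

import re

def luhn_valid(number):
    """Luhn checksum used by payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def card_like(text):
    """Return 13-16 digit runs that also pass the Luhn check."""
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_valid(c)]

print(card_like("order id 4111111111111112, card 4111111111111111"))
# Only the second number passes the Luhn check, so only it is reported.

The same idea applies to PAN numbers or any other structured identifier: the more structure the rule checks, the fewer analyst hours are wasted on noise.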

Thursday, April 07, 2011

Why IPv6 ...?

The next-generation Internet Protocol promises better, more flexible networks and new applications. It is now a mature, stable protocol, defined by IETF RFC 2460 in December 1998. To me it looks more like Novell's IPX/SPX protocol. I am trying to collect the pros and cons of moving to the next-generation IP.

  1. A study conducted by NIST, RTI International and NTIA says that IPv6 is expected to return $10 for each dollar invested.
  2. Supports jumbo datagrams of up to 4 GB, soon to be 32 GB (with 100G Ethernet), compared to 64 KB in IPv4. (This means storage over IP, such as FCoE with on-the-fly encryption, becomes possible.)
  3. Dramatically increased address space: 128 bits vs. 32 bits in IPv4, allowing for roughly 340x10^36 unique addresses (see the short example after this list).
  4. Mandated security, with support for IPsec encryption built in.
  5. Improved peer discovery (routers and hosts), with the ability to auto-configure both client and server, driving simplicity.
  6. Enhanced mobility with Mobile IPv6.
  7. Enhanced multicast capabilities, including scope management.
  8. QoS enhancements for better real-time audio and video traffic.
  9. Flow label specification, which increases the efficiency of network utilization from 27% to 81%.
  10. Reserved space within the datagram for development and research.
  11. Restoration of the end-to-end model of the internet.
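A quick way to get a feel for point 3 is Python's ipaddress module; the prefix used here is the RFC 3849 documentation range, chosen only as an example:

import ipaddress

print(2 ** 32)    # IPv4: 4,294,967,296 possible addresses
print(2 ** 128)   # IPv6: roughly 3.4 x 10**38, i.e. the 340x10^36 quoted above

net = ipaddress.ip_network("2001:db8::/64")   # documentation prefix from RFC 3849
print(net.num_addresses)                      # a single /64 subnet already holds 2**64 addresses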

That's what I have collected so far. I will keep updating the list as soon as I get more points.

Friday, March 18, 2011

Understand the Computer Browser Service


To collect and display all computers and other resources on the network, Windows NT uses the Computer Browser service. For example, opening Network Neighborhood displays the list of computers, shared folders, and printers; the Computer Browser service manages this list. Every time Windows NT is booted, this service also starts. Computer Browser is responsible for two closely related services: building a list of available network resources and sharing this list with other computers. All Windows NT computers run the Computer Browser service, but not all of them are responsible for building the list. Most computers will only retrieve the list from the computers that actually collect the data and build it. Windows NT computers can therefore have different roles. Let's take a look at them:

Domain master browser: The primary domain controllers (PDCs) handle this role. The PDCs maintain a list of all available network servers located on all subnets in the domain. They get the list for each subnet from the master browser for that subnet. On networks that have only one subnet, the PDC handles both the domain master browser and the master browser roles.

Master browsers: Computers maintaining this role build the browse list for servers on their own subnet and forward the list to the domain master browser and the backup browsers on its own subnet. There is one master browser per subnet because all browsing works on broadcasting system.

Backup browsers: These computers distribute the list of available servers from master browsers and send them to individual computers requesting the information. For example, when we open Network Neighborhood, our computer contacts the backup browser and requests the list of all available servers.

Potential browsers: Some computers don't currently maintain the browse list, but they're capable of doing so if necessary, which designates them as potential browsers. If one of the existing browsers fails, potential browsers can take over.

Nonbrowsers: These are computers that aren't capable of maintaining and distributing a browse list.

Most browsing tasks are performed automatically without any help from the administrator. The election process, which determines which computer will function as the master browser, takes place when:

• The primary domain controller (PDC) is booted.
• A backup browser is unable to obtain an updated browse list from the existing master browser.
• A computer is unable to obtain a list of backup browsers from the master browser.
Note that only the master browser is elected. As we explained last time, the domain master browser is always the PDC; if the PDC is unavailable, there is no domain master browser.



There are times when we might want to modify this behavior and interfere with the browsing process. We can do this by tweaking two registry entries. Open the Registry Editor and navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Browser\Parameters\


MaintainServerList
Yes is the default setting for all Windows NT domain controllers and configures the computer to function as a backup browser or a master browser. Setting this entry to No will configure the computer to function as a nonbrowser. Selecting Auto will configure the computer to become a potential browser, backup browser, or a master browser. This is the default for all Windows NT servers that are not domain controllers.

IsDomainMaster
The default setting for all Windows NT computers is False. Setting this value to True assigns the computer a higher election criteria value than it would normally have, giving the computer an advantage in a browser election and causing it to become the master browser if all other computers use the same operating system.
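If you prefer to script the change rather than use the Registry Editor, something like the sketch below works from Python on the machine in question. It assumes both values are REG_SZ strings as described above, requires administrative rights, and the Browser service (or the machine) must be restarted for the change to take effect.

import winreg

BROWSER_KEY = r"SYSTEM\CurrentControlSet\Services\Browser\Parameters"

# Demote this machine to a nonbrowser and make sure it never forces an election win.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BROWSER_KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MaintainServerList", 0, winreg.REG_SZ, "No")
    winreg.SetValueEx(key, "IsDomainMaster", 0, winreg.REG_SZ, "False")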

Here's a common problem: Sometimes we can't see all the servers in Network Neighborhood. We know the servers are still there because we can access them by typing \\computername in the Run dialog box. In this instance, the most obvious problem is the Computer Browser service.

We also encountered such problems. One very interesting problem I would like to share with all of you: in our network we found, several times, that only a very limited set of systems was shown in the Network Neighborhood screen.

Even though we set all the tuning parameters in the registry, the problem still existed. But if we restarted any NT machine, the problem disappeared after a day.

After several days of struggle we found that a Win98 system was the master browser, but my knowledge didn't allow me to accept this fact. We also observed that only those Win98 PCs on which we were not able to disable the browser function became master browsers.

Here a question arises: why does this happen? I finally concluded that because Windows 98 Second Edition is newer than NT4, when a user switches on a Win98 system it wins the browser election and becomes the master browser. To deal with this problem, it is essential to disable the browser function on all Win98 PCs.

Friday, February 25, 2011

First Blog of 2011 from the ashes of Old Archive of 2002

I started using my old diaries as a rough notebook. A small note (possibly an extract from an article or book) immediately got my attention. Although the note was captured by me on 23rd January 2002, somehow it had slipped from my mind. It is a quite interesting thought, which says:


If you want to get something done, give it to a very busy person.

(A golden lesson for being a good manager.) Yes, it is quite weird, but it is effective: the more you plan to do, the more you can get done, since you are taking advantage of Parkinson's Law. The law states that

"In part project tends to expand with the time allocated for it"

Just think: if you have one task to do in a day, it will take the entire day to complete. If you have three tasks to do in a day, you will finish all three; and if you have 12 tasks to do in a day, you might not get all 12 done, but you will probably finish nine.