Wednesday, November 17, 2010

Auto-populate Document Properties in Cells of an Excel Sheet

Most of the time we forget to update values in an Excel sheet that are linked to the document properties, such as the file name and file version, especially if you are using an automated version-control system.



Here is a tip to automate it:
Open the Excel sheet --> Press Alt + F11 --> Insert a new module --> Paste the function below

Module1
--------------------------------------------------------------
Function CusProps(prop As String)
    ' Returns the value of a custom document property, or #VALUE! if it does not exist
    Application.Volatile
    On Error GoTo err_value
    CusProps = ActiveWorkbook.CustomDocumentProperties(prop)
    Exit Function
err_value:
    CusProps = CVErr(xlErrValue)
End Function


Module2
--------------------------------------------------------------
Function BinProps(prop As String)
    ' Returns the value of a built-in document property, or #VALUE! if it does not exist
    Application.Volatile
    On Error GoTo err_value
    BinProps = ActiveWorkbook.BuiltinDocumentProperties(prop)
    Exit Function
err_value:
    BinProps = CVErr(xlErrValue)
End Function

Now add the functions and formulas to cells.
Formula to insert the file name:

=MID(CELL("filename",A1),FIND("[",CELL("filename",A1))+1,FIND("]",CELL("filename",A1))-FIND("[",CELL("filename",A1))-1)

Function to Insert the RCSRevision
=CusProps("RCSRevision")

Function to Insert Author
=BinProps("Author")

Tuesday, September 21, 2010

Encrypted Phone Calls & Skype Security

        After hearing so much fuss about BlackBerry encryption and the concerns of government agencies, this whole episode reminded me of a 2008 risk assessment on the use of Skype in a corporate network. At the time I was biased, and my superiors had already told me that the RA should give enough reasons to stop Skype access from the corporate network because of corporate concerns about its usage, but in the end the observations and findings were quite surprising. I would like to share those findings with you.

The sections of ISO 27001 and PCI DSS that cover cryptographic controls state that, when developing a cryptographic policy, consideration should be given to the use of encryption for protecting sensitive information transported on mobile or removable media and devices, or across communication lines. Many organisations routinely use encryption to secure thumb drives, laptops, email and instant messaging, but when it comes to discussing sensitive information over the phone, far fewer employ any form of encryption.
Depending on the nature of the business, it may be appropriate for some employees to consider using devices developed for the National Security Agency's Secure Mobile Environment Portable Electronic Device (SME PED) program, such as the Sectéra Edge from General Dynamics C4 Systems. Such devices are certified to protect wireless voice communications classified as "Top Secret," as well as access email and websites classified as "Secret."

Encryption devices for landlines are expensive and usually require all parties to have the same kit installed in order to work. And, after all, is anybody really going to tap into your phone line?

        Depending on the business, the answer may be yes. Recent stories of industrial espionage and investigative journalism show that eavesdroppers do attempt to listen in on calls regarding certain industries and types of information. So is there an easy and low-cost way to enable encrypted phone calls between colleagues or clients? While there are currently no products for encrypting landline calls that meet that description, Skype provides a free and secure way to make voice over Internet Protocol (VoIP) calls and is well worth bearing in mind as a form of communication for those organisations that want to follow their encryption policy to the letter.

When considering the pros and cons of Skype, take into account that encryption is inherent in the Skype protocol, so it can't be turned off; it is also completely transparent to the user, so there is no chance of it being inadvertently disabled. Other Skype features such as instant messaging, file transfer and video conferencing, which also include inherent encryption, may or may not be of interest, but a big plus of using Skype is that calls to other Skype users are free, with cheap rates for calls to landlines and mobile phones. What are government agencies going to argue about that?

Skype security reportedly uses non-proprietary, widely trusted encryption techniques such as RSA for key negotiation and 256-bit AES to encrypt conversations; the technology also uses a proprietary protocol and is closed source. Skype's chief security officer Kurt Sauer has said that there are no backdoors in their software to bypass the encryption on a call, but he has also said that the company complies with all government requests, implying that it might allow governmental eavesdropping when forced to by law, and Skype has never flatly denied that an attacker might be able to intercept traffic. So we've no way of knowing if there is, or if there will be, a backdoor.
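As a rough illustration of that hybrid pattern (asymmetric key negotiation plus symmetric encryption of the conversation), here is a generic sketch using Python's cryptography library. It is only the textbook pattern, not Skype's actual implementation, and the message and key sizes are illustrative.

# Generic hybrid-encryption sketch: RSA protects a one-time session key, AES-GCM protects
# the payload. This illustrates the pattern described above; it is not Skype's code.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver publishes an RSA public key (key negotiation).
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

# Sender: encrypt the payload with a fresh 256-bit AES key, then wrap that key with RSA.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"hello, this call is encrypted", None)
wrapped_key = receiver_public.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Receiver: unwrap the session key with the RSA private key, then decrypt the payload.
recovered_key = receiver_private.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))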

But given that users are unlikely to discuss information of interest to the national security services, Skype does provide strong security for most calls. Any eavesdropper would most likely find it impossible to decipher a conversation and, unlike traditional calls, there's no constant circuit between the parties as the voice data is sent via packets switched along thousands of router paths. However, the fact that encryption cannot be turned off and is completely transparent to the user is what makes Skype so appealing from an information security perspective. Encryption, particularly PKI, is notoriously difficult to roll out on a large scale, yet Skype provides easy-to-use encrypted communication for everyone.

For those organisations with a mobile workforce, Skype is also available for various smartphones, providing the same built-in encryption functionality; there is even an iPhone version. However, some network operators do not allow Skype calls over their 3G networks for fear of lost revenue, restricting it to paid-for Wi-Fi use only. Now, are government agencies going to ban the use of Skype and other such products?

Sunday, September 19, 2010

We Crib, We Cry but Why? (Part 2)

In 2009, when I wrote about my experience with BSNL and how an RTI application helped get the problem corrected, it was much appreciated. Since 2008 I have been struggling with my PF (Provident Fund) consolidation issue. I filed an RTI request in 2009 and finally got a reply in 2010, almost a year later. Some action has happened since, and I have received some good and some bad replies from different regional PF offices. I found the regional EPF office in Hyderabad very efficient and proactive, while certain regional offices, such as Gurgaon, did not even acknowledge my query. In any case, I am still fighting that battle. I also observed that the payroll and PF departments of big corporate houses are far more lax than the government institutions; in my case some of the findings were quite shocking, and the employers were not ready to take responsibility for the fiasco created by them.
Below is the step-by-step procedure, with some templates, to get information from the EPF (Employees' Provident Fund) offices.
The key to success is knowing your PF account number. The government started an initiative to have a universal NSSN number as the primary key for all PF accounts, but that is history and of no use now. An employee's PF account number has four segments separated by "/", like MH/BAN/xxxx/xxxx or HR/GGN/xxxx/xxxx. The first two letters represent the state in which the account was created, followed by a three-letter region code (Bandra and Gurgaon in these examples), then the organisation ID, and finally the employee ID. If any of the segments are missing, verify whether your employer is providing complete and correct information. If your PF is held by a trust, the case is different.

Step 1:- Raise a query (now possible online) with the regional PF office to provide the requested information. On the EPF department site you will see a tab labelled "Register Grievance" (http://epfigms.gov.in/grievanceRegnFrm.aspx?csession=NYD2j6rTaMz&). Note down the registered grievance number.
You can raise a query for your account statement, the status of a transfer request, the status of a withdrawal request, and more.
Step 2:- Wait for 30 days so that the regional office can respond to your query. If you are not satisfied with the answer or resolution, or if you did not get any response, send a reminder or clarification note on the registered query. You can check the status of queries using the View Status tab on the main portal (http://epfigms.gov.in).
Step 3:- Wait another 15 days after submitting the reminder.
Step 4:- Raise an RTI request at http://rti.india.gov.in/index.php. Once the online complaint or appeal is filed, collect all the artefacts in print, duly signed by you. The documents required to file an RTI are listed below.
  1. The signed, printed copy of the complaint form submitted above.
  2. A postal order of Rs. 10/- in favour of the Office of the Provident Fund Commissioner, New Delhi.
  3. Copies of all communications and responses received from the regional Provident Fund offices.
  4. A written application to the PF Commissioner (template enclosed in this article).
      Note: Please sign all the documents, even if they are printed copies of your emails.

Step 5:- Send it via Speed Post (No Courier) to
CENTRAL PUBLIC INFORMATION OFFICER
Office of Provident Fund Commissioner,
Employees Provident Fund Organisation,
Bhavishya Nidhi Bhawan,
14, Bhikaiji Cama Place,
New Delhi – 110 066.

Step 6:- Update the speed post tracking ID details in your registered grievance.
Step 7:- Wait for 30 more days. (70% of my issues have already been addressed; the remaining 30% are pending, so I am still at this step. If there are further steps, I will update the blog, as I will on closure.)
Templates:-

Saturday, September 18, 2010

Speed up Linux Disk I/O

        Although I have limited experience with Linux, my exposure to it taught me that the Windows 7 64-bit edition felt faster than Linux. To contradict my statement, a Linux geek in Sydney told me I should fine-tune my disk-access parameters to get better disk I/O from the hard disk. To my surprise he was right. I am not sure whether this issue exists on all hardware and all versions of Linux, but in my case it did, and I corrected it. Now my Linux desktop is as fast as my Windows 7.

        Many Linux distributions are installed in such a way that the 32-bit input/output (I/O) and DMA capabilities of today's Ultra ATA/66 or SATA  hard drives are not fully exploited. By reconfiguring your system, you can get much better performance.
Here are a few steps you can use to speed up your hard drive.
To find out whether your hard drive is configured for 16-bit I/O:
1.    Switch to superuser.
2.    Type hdparm -c followed by a space and the name of the drive (such as /dev/hdc), and press [Enter].
If the output looks like the following, your system is configured to access this drive in 16-bit mode:

/dev/hdc
I/O support = 0 (default 16-bit)

Use the following command to test your disk's speed:
  1. hdparm -Tt /dev/hdc                     (substitute your drive's name for /dev/hdc)
The system will display the data-transfer rate (in MB/sec) for buffer-cache and buffered disk reads.


To turn on 32-bit I/O and DMA support, run the following command (use your drive's device name):

hdparm -c 1 -d 1 /dev/hdc

If the command succeeds, you'll see the message:

/dev/hdc:
setting 32-bit I/O support flag to 1
setting using_dma to 1 (on)
I/O support = 1 (32-bit)
using_dma = 1 (on)

Try the hdparm -Tt /dev/hdc command to see how much improvement you've obtained. If you're happy with the result, repeat this command for additional drives, if any.

To commit the successful settings, use the same command with the -k option, as in the following example:

hdparm -c 1 -d 1 -k 1 /dev/hdc

Because these settings are lost when Linux reboots, you may wish to put the command into a system initialisation script, such as /etc/rc.d/rc.local. If you modify this script, be careful not to erase any of the existing code!



I am still searching for the setting to enable 64-bit I/O, but I do not have a 64-bit edition of Linux to experiment with. Once I figure out the setting I will update this post. If you are aware of it, please let me know or post it in the comments.

Thursday, September 16, 2010

Grilling the grl’s

We have talked about grl’s (green resource locators), but how are these green resource locators going to work? That is the subject of this concept-shaping article.

Currently, data-centre infrastructure efficiency is measured as Power Usage Effectiveness, known to most of us as the PUE of a data centre. PUE is a metric of the efficiency of a data centre's physical infrastructure, whose major elements are the power and cooling systems, and it quantifies how much of the input power ends up doing useful work. It is important to note that this efficiency does not measure how much power is being used; rather, it indicates the magnitude of power being wasted compared to what is actually used by the IT load.

PUE = Total data-centre input power / IT load power

This means all power not consumed by the IT load is considered a loss, which includes:
• The internal inefficiencies of the power system (power-path devices such as UPSs, PDUs, wiring, etc.), dissipated as heat.
• All power consumed by the cooling system.
• All power consumed by other data-centre physical-infrastructure subsystems.




How PUE will help identify the grl’s

      In certain parts of Europe, interactive automated electricity meters are used by some electricity distributors. In some cases these meters also feed energy generated in-house from renewable sources back into the grid, and that amount is subtracted from the billing cycle; all of this data input and calculation is automated. We can use such meters to calculate the PUE of every data centre and use the PUE factor as one of the criteria to rate green resources. The PUE factor is the change in PUE (positive or negative) over a defined period of time.


          For example, in the European winter the PUE of a data centre in Finland might be 1.6, while in summer, three months later, it is 2.2. The PUE factor of the Finland data centre would be 1.6 - 2.2 = -0.6, while over the same period the PUE factor of a data centre in Australia (where Nov-Dec is summer in some parts) is +0.3. That means in Nov-Dec the Finland DC is green, while in Jun-Jul the Australia DC is green.
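A minimal Python sketch of this arithmetic, assuming the hypothetical readings above and taking the PUE factor as the earlier-period PUE minus the later-period PUE, as in the text:

# Illustrative arithmetic only; the readings are hypothetical.
def pue(total_input_kw, it_load_kw):
    # PUE = total data-centre input power / IT load power
    return total_input_kw / it_load_kw

print(pue(1600, 1000))   # 1.6, e.g. the Finland winter figure (hypothetical kW values)

seasonal_pue = {
    # site: (PUE in Nov-Dec, PUE in Jun-Jul)
    "Finland DC":   (1.6, 2.2),
    "Australia DC": (2.0, 1.7),
}

for site, (nov_dec, jun_jul) in seasonal_pue.items():
    factor = nov_dec - jun_jul                  # the "PUE factor" over the period
    greener = "Nov-Dec" if factor < 0 else "Jun-Jul"
    print(f"{site}: PUE factor {factor:+.1f}, greener in {greener}")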

          PUE is not the only factor that should define what is green. The rating should also include what percentage of the energy consumed by these DCs is generated from renewable sources.


         To conclude: if at any time in the future the grl’s dream comes true, that would be my two cents to Mother Nature for our future generations.

Monday, September 13, 2010

Cloud Computing: A New Era in IT Transformation

Every five years or so a recession teaches us something, and technology transformation becomes inevitable. I am fortunate to have witnessed the Y2K bug-rectification boom, the dot-com burst, the ERP menace, and now technology consolidation, in just 12 years. I started my career in the centralised-computing era; now I am designing solutions that go back to centralised computing in a distributed way. What does that mean? Yes, I am a daydreamer, and this article is my imagination based on certain hypotheses. After all, imagination is the mother of all inventions.
I am dreaming that law-making agencies will help the adoption of cloud computing. Some of my colleagues in the security domain might be thinking this is rubbish, because some of the world's security experts are opposing cloud adoption, citing security and compliance as constraints, while I am dreaming that governments should facilitate it. But why?
By the end of 2009 most organizations realised that they were not able to meet their financial targets, and all the economists were trying to figure out the root cause. In the last five years the technology transformation happened in such a way that control of technology moved out of the IT department's hands, and now the business drives IT; IT has become a facilitator that provides solutions to the business. The word "impossible" has been replaced with "what's the cost?". The ERP/CRM menace happened because technology investment was in the hands of IT, but now the case is reversed: technology investment is in the hands of the business. Then what is wrong? Here is my observation: around 2004-2005 the number of devices required to cater to the same set of users doubled; if a solution used "n" devices for a 1000-user base in 2004, it now uses "2n". Yes, in 2004-05 we had a lot of constraints.
That is not only my realisation; many people realised it, and they also realised that much of the investment is required only for a specific period of time. Once systems stabilise, the infrastructure requirement reduces, and separate infrastructures are not needed for development, build, test and production environments. But once systems are leased or the money is invested, it is invested, and no one is ready to relook at the comprehensive solution. I have witnessed hundreds of servers in data centres consuming energy for nothing, because the support people are too busy or have their own apprehensions about decommissioning that infrastructure. And then the cost-cutting measures came, bang on target, when they were needed most. Now the world should realise what wastage is.
A new buzzword also became common in business: cloud computing. In simple terms, pay per use; if it is required, pay for it, and if not, don't. How does it help an organization? Lazy IT staff are no longer a constraint on decommissioning infrastructure (commissioning and decommissioning are process-driven). The cost of running infrastructure that is not in use is transferred to the service provider, so the organization is least bothered about why something is switched on when it is not in use. There are also no time constraints on expanding the infrastructure when it is required.
Then why are we so slow to migrate to the cloud? Here comes the million-dollar question. Security and compliance are among the key factors pushing organizations back. We need an assurance framework which will not only facilitate cloud migration but also ensure that the providers' modes of operation are compliant and cater to all the regulatory requirements. National agencies could provide this service to customers; it could be called TaaS (Trust as a Service). The paragraphs below highlight resolutions for some of the key constraint areas.
Addressing a Compliance Requirement


No regulatory act or worldwide standard dictates to a business where it should store and process its data. They only state that you are responsible for your data and should provide all the essential controls to protect the data you are responsible for. Security terminology revolves around the CIA triad, so let us see how each pillar of the triad can be addressed in a cloud environment. Some of the solutions below are part of my imagination, white papers and hypotheses, and some are learnings from the students of UTS during my visit; students imagine more than we do, as they are not yet exposed to real-world work issues like corporate politics and business logic.
Address confidentiality with SSEXS, i.e. Strip, Split, Encrypt, Exchange and Store. Solutions can easily be designed that utilise more than one cloud service provider to store the most confidential data with the least chance of data leakage by any single provider.
Let's take two examples where data is stored with cloud service providers: one where we take only the storage space as a service, and another where we take the database application as a service.
In the first case, as soon as the database accesses the storage space, SSEXS is applied: the intermediary device or solution sitting between the database and the STaaS provider performs the SSEXS.
  1. First, strip the encapsulation of the data. (With existing technology we wrap data again and again to keep it compliant with the latest technology, unnecessarily increasing the data overhead; if you look at communication protocols today, they add 20-50% extra bytes to ensure the original data stays intact and reaches the right part of the system. These additional bytes could be encryption, addressing, CRC or anything else. Now is the time to offload this burden from the actual data set.)
  2. Split the original data set into a number of pieces that will go to the respective storage service providers (from 2 to N, depending on the criticality and sensitivity of the data).
  3. Encrypt the split data sets with encryption keys.
  4. Exchange the encryption keys and data sets.
  5. Store the keys and data with the service providers.
In the second case we take the complete database as a service (AaaS) from the providers. As soon as the application server at the business-logic tier tries to access the data set stored in the providers' databases, the intermediary device performs the SSEXS.
  1. Strip the data set from their encapsulation cocoon.
  2. Split the data set to be stored in the service providers' database tables, based on some logic (3 or more providers).
  3. Encrypt the data sets with encryption keys.
  4. Exchange the encryption keys.
  5. Store the keys and data with service providers.
In the second case, unless all three or more parties are involved together, reconstructing meaningful data is not possible.
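A minimal sketch of the split/encrypt/exchange idea, assuming in-memory dictionaries as stand-in "providers" and using the Python cryptography library's Fernet recipe; the split logic and crossover key placement are simplified assumptions, not a description of any real product:

# Illustrative sketch of SSEXS (Strip, Split, Encrypt, Exchange, Store).
from cryptography.fernet import Fernet

def ssexs_store(record, n_providers=3):
    # Strip: assume any protocol encapsulation has already been removed; work on raw bytes.
    # Split: cut the record into n roughly equal pieces.
    size = -(-len(record) // n_providers)               # ceiling division
    pieces = [record[i * size:(i + 1) * size] for i in range(n_providers)]

    providers = [{} for _ in range(n_providers)]
    for i, piece in enumerate(pieces):
        key = Fernet.generate_key()                      # Encrypt: one key per piece
        providers[i]["data"] = Fernet(key).encrypt(piece)
        # Exchange + Store: hold each key with a different provider than its data
        providers[(i + 1) % n_providers]["key_for_" + str(i)] = key
    return providers

def ssexs_load(providers):
    n = len(providers)
    pieces = []
    for i in range(n):
        key = providers[(i + 1) % n]["key_for_" + str(i)]
        pieces.append(Fernet(key).decrypt(providers[i]["data"]))
    return b"".join(pieces)

providers = ssexs_store(b"account=1234;balance=500;owner=alice", n_providers=3)
assert ssexs_load(providers) == b"account=1234;balance=500;owner=alice"
print("No single provider holds both a readable piece and its key.")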

Addressing availability with TDRL, i.e. Time-Dependent Resource Locators. Since the inception of the concept of content delivery based on the requestor's location, resource locators and the dynamic delivery of content have fascinated me. One of the leading content-delivery service providers is Akamai; they created their own logic for resource locators and content delivery, which they call "arl", Akamai resource locators. Since location barriers and constraints are becoming obsolete (data now crosses the globe in milliseconds), we can make this mechanism more useful to Mother Nature. I started thinking: why can't we optimise it and make it a bit more dynamic? Perhaps in the future such a logic and delivery mechanism could be called "grl's" (no, that is not a misspelling of "girl's"; it means green resource locators).
Based on an individual system setting, resources and content delivery would be directed to a location where green resources can be allocated to you for computing. For example, if my system is set for green computing, then as soon as a resource request is passed on, the service provider identifies a location where resources are running on renewable energy, or have a lower carbon footprint, and makes the resource available from there.
Going deeper into the concept, let's assume service provider A has data centres in Canada, the USA, Finland, Japan, Australia and South Africa, and data sets are replicated across all of them. When user A in the USA requests a resource during US daytime, the resource locator redirects the traffic to either the US or the Canadian DC (based on the requestor's system setting, the carbon-footprint calculation and the availability of resources). If user A requests the resource at night, the US and Canadian data centres run on minimum power, since the energy is not being generated from sunlight, so the resource is served from Australia or Japan, where daylight is available to generate energy for data-centre operations.
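A toy sketch of such a time- and carbon-aware selection policy; the sites, UTC offsets, carbon-intensity numbers and the "solar window" heuristic are all invented for illustration:

# Illustrative "green resource locator": pick the replica data centre with the lowest
# effective carbon intensity for the current time of day.
from datetime import datetime, timezone, timedelta

DATA_CENTRES = [
    # name, UTC offset (hours), grams CO2 per kWh when on grid power (invented figures)
    ("USA",       -5, 420),
    ("Canada",    -5, 130),
    ("Finland",    2,  90),
    ("Japan",      9, 480),
    ("Australia", 10, 700),
]

def is_daytime(utc_now, utc_offset_hours):
    local_hour = (utc_now + timedelta(hours=utc_offset_hours)).hour
    return 7 <= local_hour <= 19            # crude "solar window"

def pick_green_dc(utc_now, prefer_green):
    if not prefer_green:
        return DATA_CENTRES[0][0]           # fall back to the default/nearest site
    candidates = [
        # assume solar generation cuts effective carbon intensity during local daytime
        (carbon * 0.3 if is_daytime(utc_now, offset) else carbon, name)
        for name, offset, carbon in DATA_CENTRES
    ]
    return min(candidates)[1]

now = datetime.now(timezone.utc)
print("Serving request from:", pick_green_dc(now, prefer_green=True))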
Addressing integrity: don't you think that if the above two are addressed properly, integrity is largely taken care of automatically? It follows from the data-redundancy checks and the encryption/decryption keys. Since data sets are stored with more than one service provider and in multiple data centres, any accidental loss of data can easily be recovered. If during replication there is a discrepancy in the checks and balances of a data set, a third data centre is consulted, and the data set that matches in more than two data centres takes priority and is treated as the final data set, which is then replicated across all sites.
Second, since all the data sets are stored in encrypted form with a crossover exchange of keys, non-repudiation is also assured. Records can be re-punched, not manipulated: if record "A" needs to be updated, a whole new record "A-1" is created with new keys, the old record is discarded, and a redirection to record "A-1" is provided.
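A minimal sketch of the two-out-of-three reconciliation described above, with invented replica contents:

# The copy that at least two data centres agree on wins and is re-replicated everywhere.
from collections import Counter
from hashlib import sha256

def reconcile(replicas):
    """Return the data set that a majority of data centres agree on."""
    digests = Counter(sha256(data).hexdigest() for data in replicas.values())
    winning_digest, votes = digests.most_common(1)[0]
    if votes < 2:
        raise ValueError("no two data centres agree; manual investigation needed")
    return next(d for d in replicas.values() if sha256(d).hexdigest() == winning_digest)

replicas = {
    "Finland":   b"record A-1",
    "Australia": b"record A-1",
    "Canada":    b"record A",          # stale or corrupted copy
}
print(reconcile(replicas))              # b"record A-1" is replicated back to Canada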
What else can be done to make the cloud a greener and more viable solution?


Governments can add monetary benefits and regulations to encourage a greener approach. Some that I can think of are:
1.) Some kind of tax rebate on the data-centre income of service providers if their energy consumption is reduced by 10/15/20% every year.
2.) Subsidised availability of equipment for the use and generation of renewable energy in data centres hosting cloud infrastructure.
Using the cloud in this way not only makes our environment and the earth greener but will also make the world more harmonious. People will be world citizens, not only citizens of one country, and cross-dependencies between countries will increase, which will encourage them to be tolerant of each other. Repetitive work for citizens will also be reduced, and much more.
You might be wondering what role government agencies play in all of this. Here comes the trust: national agencies should facilitate the development of such intermediary devices and solutions. They should also derive compatibility standards so that developing cross-platform solutions becomes easier, and create certification programmes for cloud service providers to qualify them for storing data of different classifications.
WIIFM for Governments
But why should government agencies do this, and why should they facilitate these kinds of activities? The reason is quite simple: national and world interest. It has been reported that cloud migration can cut energy consumption by 30 to 50 percent annually, which means a lower carbon footprint. Some countries could emerge as green cloud-service-provider countries (like Finland and Australia, which are both stressing reduction of their carbon footprint and trying to get more than 15% of their energy from renewable sources, and whose climatic conditions allow data-centre hosting with less air-conditioning). Some countries (like Switzerland and Singapore) could help organizations operating from their territory to have redundant, green and safe data storage that can be retrieved easily in case of disaster and that meets global and country-specific regulatory requirements, while reducing their carbon footprint. Countries like India and other African and Latin American countries could build such ties and have safe, cheap and green data storage and processing capabilities for projects like UID for their citizens, without wasting too many non-renewable energy sources, and much more. But at least let us start dreaming about it.
Your feedback on these imagined concepts is highly appreciated. If at any time in the future, 5/10/20 years from now, you come across this article in your work or research, please don't forget to post your feedback.

Is the HITECH Act, after HIPAA, really bothering healthcare professionals?

There is nothing much to worry about; first, let's identify some of the good points of the HITECH Act. Under the HITECH Act's Medicare and Medicaid bonus payout scheme, a physician who can demonstrate "Meaningful Use" of an EMR (Electronic Medical Record) in 2011 would be eligible to receive US$18,000 from Medicare for the first year and US$44,000 in total through 2015. These incentives will be reduced for adoption after 2012. Physicians whose practices feature a high volume of Medicaid patients can qualify for up to US$65,000 in incentives.

Wow! Isn't it quite a good bonus? Let's figure out what it is.

The HITECH (Health Information Technology for Economic and Clinical Health) Act was signed into law in February 2009 as part of ARRA to protect PHI (Protected Health Information) stored electronically from potential data breaches. It also helps regulators strengthen enforcement and the penalties associated with wilful violation of HIPAA. Guidance has been issued specifying technologies and methodologies that render PHI unusable, unreadable or indecipherable to unauthorised individuals. The HITECH Act also defines the notification, response and handling of incidents when a breach is detected, which must happen within 30 days and no later than 60 days from the day the breach occurred. The law applies to all HIPAA covered entities, their business associates and third parties, including those operating outside the US.
If a breach of PHI affects more than 500 individuals annually, it mandates notification of the media and HHS. The HITECH Act also applies to vendors of personal health records that provide online repositories, and to applications that let people keep track of their health based on the information in those repositories, e.g. applications using blood-pressure cuffs or pedometers whose readings can be uploaded to online personal health records.

What needs to be done to make my practice compliant with the HITECH Act?
1. Implement a data classification policy, approved and communicated by management.
2. Implement a process to detect potential data breaches and initiate timely incident-response activities.
3. Implement a risk assessment and analysis method to identify the significance of the risk (financial, reputational or any other harm a potential breach may cause to individuals).
4. Implement a notification process.
5. Implement policies, processes and procedures for filing complaints and ensuring compliance.
6. Last but not least, encrypt data at rest and in transit, in any form.
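As a small illustration of point 6, here is a sketch of encrypting a PHI export at rest with the Python cryptography library; the file names are hypothetical and the key handling is deliberately simplified (a real deployment would use proper key management, such as an HSM or a key-management service):

# Minimal illustration of encrypting a PHI export at rest.
from pathlib import Path
from cryptography.fernet import Fernet

key_file = Path("phi_master.key")                       # hypothetical key file
if not key_file.exists():
    key_file.write_bytes(Fernet.generate_key())
cipher = Fernet(key_file.read_bytes())

plaintext = Path("patient_export.csv").read_bytes()     # hypothetical PHI export
Path("patient_export.csv.enc").write_bytes(cipher.encrypt(plaintext))
Path("patient_export.csv").unlink()                     # remove the unencrypted copy

# To read it back later:
restored = cipher.decrypt(Path("patient_export.csv.enc").read_bytes())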

For more details, refer to the Federal Register, Part 2, Department of Health and Human Services.

Friday, July 09, 2010

Why are things cyclic in this universe?

Recently I was reading a book by Paul Davies called "The Mind of God". There is one chapter titled "Can the Universe Create Itself?". Although the author is a professor of philosophy and a scientist and I am just a technician, I still thought I would note down how I correlate the topic with myself.

Monday, April 26, 2010

Disabling the Camera and Video Recorder on a BlackBerry Bold

For the last few days we were struggling to disable the BlackBerry Bold camera feature. I searched lots of forums and discussion boards, and most of them talk about using the enterprise service. But what if your organization's BB enterprise management team works like a government agency, so bureaucratic that it will take 3-4 months to have this feature disabled via the enterprise server, while you, being a good corporate citizen and wanting to set an example, want it disabled now?
Yes, you will get suggestions like putting a drop of epoxy on the lens or physically removing the camera lens, but such steps void your device warranty. After three days of research I found an easy and affordable way to disable the camera: run the following three commands and your device's camera and video recorder will be gone. Remember, though, that if a user does a software upgrade by connecting the device to a computer, you have to repeat these three steps.

JAVALOADER -u Erase -f net_rim_bb_camera.cod

After running this command your device will reboot. Once it's up and running, run the next command, and then repeat the step a third time.

JAVALOADER -u Erase -f net_rim_bb_videorecorder.cod

JAVALOADER -u Erase -f net_rim_bb_mediarecorder.cod


Now be happy: the camera and video recorder features are disabled. javaloader is a command-line tool you can get from the BlackBerry Java Developer tools.

Thursday, February 25, 2010

10 Steps to Achieve a Successful DLP Implementation

A DLP implementation usually has 9-12 process steps. Some of them are sequential and some can be completed in parallel. They are as follows:

1) Identify what type of solution we require (1-3 months, based on the enterprise size and partner agreements)

There are many different types of products on the market that promise to solve DLP, such as hard-drive encryption products or endpoint port-control solutions. While they may address one of the ways data loss can occur, they do not address the issue the way a content-aware DLP solution will: content-aware DLP solutions focus on controlling the content or data itself. Some are already in use and some are still being deployed (data in motion / at rest / at the endpoint, single channel or enterprise-wide, etc.).

2) Identify the information we want to protect (usually the most expensive and time-consuming step of the entire deployment, varying from 6 months to 2 years; success stories and case studies show that R&D and intellectual-property protection takes 5-8 months, financial-data protection takes 1-3 years, and healthcare and PII protection takes 2-5 years). This step has three sub-steps: identification, discovery and classification.

Data Identification

DLP solutions include a number of techniques for identifying confidential or sensitive information (based on metadata or signature scanning); metadata scanning for enforcement is the most common deployment technique. Data identification is sometimes confused with discovery: identification is the process by which an organization uses a DLP technology to determine what to look for (in motion, at rest, or in use). A DLP solution uses multiple methods for deep content analysis, ranging from keywords, dictionaries and regular expressions to partial document matching and fingerprinting. The strength of the analysis engine directly correlates with its accuracy, and the accuracy of DLP identification is important for lowering or avoiding false positives and negatives. Examples include the data sheets used by the HR payroll department, customer data sheets used by operations, company balance sheets, new-business work-order agreement procedure documents, and so on.
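A small illustration of the regular-expression side of content identification; the patterns are simplified examples, not production DLP rules (real engines add checksums such as the Luhn test, context analysis and fingerprinting):

# Illustrative regex-based content identification, one of the analysis methods above.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def identify(text):
    """Return the sensitive-looking tokens found in a piece of content, by category."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items() if pat.findall(text)}

sample = "Invoice for jane.doe@example.com, card 4111 1111 1111 1111, SSN 123-45-6789"
print(identify(sample))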

Data Discovery

This means identifying where the data sleeps.

Discovering where sensitive data lives is most important when dealing with unstructured data. If data has structure, then locating it is only necessary for risk assessment: if the data can be detected using structured patterns on a server, file system, document repository or other system, then it can be discovered at a loss vector. With unstructured data, however, the information must be located first so that it can be identified when it leaks. One particular challenge is file servers. The use of file servers and shares always starts with the best intentions of keeping things organized, but unless the data users are diligent in placing files in the appropriate shares, it will be difficult to identify which shares need to be protected and which can be ignored. Ideally, each group in an organization will have shares assigned around job functions and data classifications.

Document management systems are less of a challenge, since they impose a certain degree of organization on their content by virtue of their structure; browsing through the structure of a data repository and identifying the administrators of the various sections should allow us to quickly discover which documents are sensitive and which are not. Discovery of data poses more political challenges than technical ones. While defining data discovery, the following points should be considered (a small discovery sketch follows them).

Understand what is practically achievable. Rather than perfection, aim for what is achievable; for example, rather than discovering and classifying every piece of potentially sensitive data, we might focus on high-risk data such as credit-card information and customer data.

Involve key players early. Involving key stakeholders early in the process increases the likelihood that they will support it during implementation.

Strictly restrict metadata-removal tools on our devices. As DLP usage increases, more and more metadata-removal tools are popping up on the internet; deploy strict controls to restrict their usage.
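A toy sketch of discovery on a file share, reusing the identify() helper from the identification sketch above; the share path and extension filter are assumptions, and a real crawler would also handle binary formats, permissions and scale:

# Illustrative data discovery: walk a share and flag files whose contents match the patterns.
import os

SHARE = r"\\fileserver\finance"                 # hypothetical file share
SUSPECT_EXTENSIONS = {".txt", ".csv", ".log"}

def discover(root):
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in SUSPECT_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    hits = identify(handle.read())   # identify() from the earlier sketch
            except OSError:
                continue                             # unreadable file: skip it
            if hits:
                findings.append((path, sorted(hits)))
    return findings

for path, categories in discover(SHARE):
    print(f"{path}: {', '.join(categories)}")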

Data Classification

Data classification plays a very critical role in the success of a DLP implementation. Most organizations think that defining the classification labels is more than enough, but they should also consider classification tools that attach the metadata (also called meta tags) to all the files used in the organization. Once the classification meta tags are observed by the DLP controller, it starts executing the preventive and informative policies; either deny or allow rules can be defined to reduce the processing load on the system. The organization should identify and appoint a designated document-management officer who can address concerns regarding the classification criteria and vouch for the classification of ambiguous documents.
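A toy sketch of classification meta tags driving a simple allow/deny decision; the labels, the "webmail" channel and the sidecar-tag approach are assumptions for illustration (commercial tools typically embed tags in the document properties themselves):

# Illustrative classification tagging and a deny/allow rule driven by the tag.
import json
from pathlib import Path

LABELS = ["Public", "Internal", "Confidential", "Restricted"]

def tag(path, label):
    """Write a sidecar meta tag next to the file (stand-in for embedded document metadata)."""
    Path(str(path) + ".tag").write_text(json.dumps({"classification": label}))

def allowed(path, channel):
    """Deny sending anything above 'Internal' over an uncontrolled channel such as webmail."""
    tag_file = Path(str(path) + ".tag")
    label = json.loads(tag_file.read_text())["classification"] if tag_file.exists() else "Restricted"
    if channel == "webmail":
        return LABELS.index(label) <= LABELS.index("Internal")
    return True

# Example: tag a payroll sheet and check whether it may leave via webmail.
tag(Path("payroll_2010.csv"), "Confidential")
print(allowed(Path("payroll_2010.csv"), "webmail"))    # False: blocked by the DLP policy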

Now some readers might be interested to know which products are available in the market to accomplish the above three. With my limited knowledge, I have seen the Websense Data Security solutions available in the Websense Data Security Suite*, which comprises four fully integrated modules that can be deployed based on customer need:

Websense Data Discover – Discovers and classifies data distributed throughout the enterprise.

3) Establish why the content needs to be protected (about a month of good discussion among the different stakeholders; the second major involvement of the management group). Here we have to define why the identified data should be protected, for example whether it is for compliance reasons or for protection of intellectual property. This can change not only how the content is identified but also how it is reported on. For compliance, we have to ensure that we meet not only the data coverage required, like credit-card numbers and other personally identifiable information (PII) as required for PCI DSS compliance, but also the reporting requirements for the auditing process. This is going to be a critical step in the success of our DLP solution, so we need to give it the time it deserves.

4) Identify how data is currently lost (again, a proper FMEA and risk analysis give better results; allow a month for this process step too; the major involvement here is from technical staff). This will help us determine the type of product to use. Is it through email? Is it being uploaded to websites such as web mail or blog sites? Is it the use of USB sticks on our endpoints? The most important advice here is not to try to solve every possibility for data loss we can think of. Remember that what we are trying to stop is the accidental loss of data; trying to stop the deliberate loss of data is significantly more difficult and will quite definitely have a serious impact on the business, and if a user is resourceful and knowledgeable enough they will find ways to do it. An audience that many companies forget about is the remote user and the devices they use off-site: people are bolder and more daring when they are not in the office.

5) Technical DLP policy creation (usually 2-8 months; market research shows that a consulting firm like Accenture accomplishes this in about 8 weeks, while organizations like DuPont and Ranbaxy have been able to define all their DLP policies in a 16-18 week time frame with their DLP experts and partners). This is where we get down to the implementation. Once the solution is installed, we look at how to create policy that recognizes the actual content we want to control and how it will be controlled. The steps above will tell us what should be in the policy and how we can prevent the information from leaking out of our organization.

6) Testing (2 months of rigorous testing is sufficient). As with any other IT implementation, testing is a major factor in ensuring success.

7) Policy communication. A step many miss, but I would consider it a crucial part of being successful in our organization. Employees need to be brought into the project to guarantee success: it will impact their day-to-day functions, so we need to be certain they understand why these controls are in place and support their use. This can be as simple as explaining why we are implementing such a control and what could happen if we didn't. Obtain their feedback on the controls and on how we might minimise the impact on their work.

8) DLP system policy enforcement (2 months, sequential from testing). Now that we have created the policy, tested it and communicated it, the time has come to throw the big switch from merely monitoring controls to actively enforcing them. Don't turn them all on at once; prioritise them and release the most important and critical ones first. Ensure we have plenty of coverage to rectify, as they arise, any issues not found in testing, as these will impact the employees who are trying to do their jobs. If we are not helpful or responsive, our employees' support will vanish.

9) Future-proofing for the organization (ongoing). We have taken the first steps here, but don't assume the job is done. Look for better ways of classifying content, or reconsider where different types of content are saved. When new applications or systems are installed, consider how we can implement them to simplify the DLP controls required. Also continue to pay attention to the evolution of our DLP product; keep it up to date, as there will be newer and better ways of implementing the controls we have in place.

Sunday, January 24, 2010

Absolutely Brilliant Interview Answers of a Job Hopper!!!

Some time back one of my friends forwarded this mail to me. I was quite impressed with the answers and thought I would document them here. One of my HR managers once told me, "Don't fall in love with the company where you're working, don't fall in love with the superior with whom you are working, fall in love with the work you're doing." Six years after that lesson, when I got this mail forward, I kept thinking about him again and again, and I would like to dedicate this post to him.



Some, rather most organizations reject his CV today because he has changed jobs frequently (10 in 14 years). My friend, the ‘job hopper’ (referred here as Mr. JH), does not mind it…. well he does not need to mind it at all. Having worked full-time with 10 employer companies in just 14 years gives Mr. JH the relaxing edge that most of the ‘company loyal’ employees are struggling for today. Today, Mr. JH too is laid off like some other 14-15 year experienced guys – the difference being the latter have just worked in 2-3 organizations in the same number of years. Here are the excerpts of an interview with Mr. JH :

Q : Why have you changed 10 jobs in 14 years?

A : To get financially sound and stable before getting laid off the second time.

Q : So you knew you would be laid off in the year 2009?

A : Well I was laid off first in the year 2002 due to the first global economic slowdown. I had not got a full-time job before January 2003 when the economy started looking up; so I had struggled for almost a year without job and with compromises.

Q : Which number of job was that?
A : That was my third job.

Q : So from Jan 2003 to Jan 2009, in 6 years, you have changed 8 jobs to make the count as 10 jobs in 14 years?

A : I had no other option. In my first 8 years of professional life, I had worked only for 2 organizations thinking that jobs are deserved after lot of hard work and one should stay with an employer company to justify the saying ‘employer loyalty’. But I was an idiot.

Q : Why do you say so?

A : My salary in the first 8 years went up only marginally. I could not save enough and also, I had thought that I had a ‘permanent’ job, so I need not worry about ‘what will I do if I lose my job’. I could never imagine losing a job because of economic slowdown and not because of my performance. That was January 2002.

Q : Can you brief on what happened between January 2003 and 2009.

A : Well, I had learnt my lessons of being ‘company loyal’ and not ‘money earning and saving loyal’. But then you can save enough only when you earn enough. So I shifted my loyalty towards money making and saving – I changed 8 jobs in 6 years assuring all my interviewers about my stability.

Q : So you lied to your interviewers; you had already planned to change the job for which you were being interviewed on a particular day?

A : Yes, you can change jobs only when the market is up and companies are hiring. You tell me – can I get a job now because of the slowdown? No. So one should change jobs for higher salaries only when the market is up because that is the only time when companies hire and can afford the expected salaries.

Q : What have you gained by doing such things?

A : That's the question I was waiting for. In Jan 2003, I had a fixed salary (without variables) of say Rs. X p.a. In January 2009, my salary was 8X. So assuming my salary was Rs.3 lakh p.a. in Jan 2003, my last drawn salary in Jan 2009 was Rs.24 lakh p.a. (without variable). I never bothered about variable as I had no intention to stay for 1 year and go through the appraisal process to wait for the company to give me a hike.

Q : So you decided on your own hike?

A : Yes, in 2003, I could see the slowdown coming again in future like it had happened in 2001-02. Though I was not sure by when the next slowdown would come, I was pretty sure I wanted a ‘debt-free’ life before being laid off again. So I planned my hike targets on a yearly basis without waiting for the year to complete.

Q : So are you debt-free now?

A : Yes, I earned so much by virtue of job changes for money and spent so little that today I have a loan-free 2 BR flat (1200 sq. feet) plus a loan-free big car, without bothering about any EMIs. I am laid off too but I do not complain at all. If I have laid off companies for money, it is OK if a company lays me off because of lack of money.

Q : Who is complaining?

A : All those guys who are not getting a job to pay their EMIs off are complaining. They had made fun of me saying I am a job hopper and do not have any company loyalty. Now I ask them what they gained by their company loyalty; they too are laid off like me and pass comments to me – why will you bother about us, you are already debt-free. They were still in the bracket of 12-14 lakh p.a. when they were laid off.

Q : What is your advice to professionals?

A : Like Narayan Murthy had said – love your job and not your company because you never know when your company will stop loving you. In the same lines, love yourself and your family needs more than the company's needs. Companies can keep coming and going; family will always remain the same. Make money for yourself first and simultaneously make money for the company, not the other way around.

Q : What is your biggest pain point with companies?

A : When a company does well, its CEO will address the entire company saying, ‘well done guys, it is YOUR company, keep up the hard work, I am with you.” But when the slowdown happens and the company does not do so well, the same CEO will say, “It is MY company and to save the company, I have to take tough decisions including asking people to go.” So think about your financial stability first; when you get laid off, your kids will complain to you and not your boss.

Monday, January 11, 2010

Why Valve Replacement Gives Big Bucks to Doctors Rather Than Mechanics

A mechanic was removing the cylinder heads from the motor of a car when he spotted a famous heart surgeon in his shop, standing off to the side, waiting for the service manager to come and take a look at his car.

The mechanic shouted across the garage," Hello Doctor!! Please come over here for a minute."

The famous surgeon, a bit surprised, walked over to the mechanic.

The mechanic straightened up, wiped his hands on a rag and asked argumentatively, "So doctor, look at this. I also open hearts, take
valves out, grind 'em, put in new parts, and when I finish this will work as a new one... So how come you get the big money, when you and me
is doing basically the same work? "

The doctor leaned over and whispered to the mechanic.....

.
.
.
.
.
.
.

...
..
..
Doctor said : " Try to do it when the Engine is RUNNING "