• 5 employee cyber security training questions you need to ask

    by Patrick Knight | Mar 15, 2018

    Chances are your organization already addresses cyber security to some extent in new employee onboarding. Whether that’s traditional training videos on cyber security that employees watch on their own time, presentations by IT, or brochures, most employees know that their companies have cyber security protocol and best practices. But how many of your employees actually know what the protocol and practices are?

    In 2016, the average cost of a data breach was $3.62 million. And according to a study by the Ponemon Institute, careless workers are the leading cause of data breaches in small and medium-sized businesses. If you want to improve your business’s cyber security, it’s time to get serious about employee education and cyber security training. You can start by asking these questions about your employees’ training:

    1. Is your information relevant?
      Everyone should be familiar with the basics of cyber security, but not all employees need a complete cyber security education. HR professionals, for example, generally have access to sensitive data such as Social Security numbers and bank account numbers, so they will need special training on how to safely handle that information. But that specialized training wouldn’t be applicable to a new marketing team member who can’t even access those SSNs. Tailoring your cyber security education to specific jobs will help your employees stay engaged throughout the training – and hopefully remember and implement what was covered.
    2. Is your information understandable?
      The cyber security world is chock-full of jargon. To the average employee, the words “ransomware,” “DDoS,” “patch,” and “worm” just don’t have any context when it comes to their job. Not only will they not understand you if you launch into cyber-speak, they might feel unintelligent and simply tune you out. Speak their language, not yours. A Forbes article also suggests keeping your cyber security training short; try a few quick 10-minute sessions instead of an hour-long training. If you break the training up, it will be easier to digest and remember.
    3. Have you told them WHAT TO DO with this information?
      The basics of cyber security are great, but make sure you are sharing how to implement the security measures. Do you want employees to go change their passwords? Tell them some good rules of thumb for creating strong ones. Do you want everyone to update software? Tell them about auto-updates and show them how they can set it up. Giving employees action items turns cyber security from an abstract idea into a goal they can work to achieve.
    4. Do your employees understand why it’s important?
      You know how costly security breaches can be. You know the consequences of employee negligence. So tell your employees. If they see how simple steps to improve their security can impact business operations, they’re more likely to take those steps. All of us are more likely to do something if we understand why we are supposed to be doing it. It won’t bring about 100% compliance, but it will help your employees to know you aren’t making demands just to make their lives more complicated – you’re asking for help in making a real difference in the business.
    5. Have you covered the basics?
      Everybody could use a refresher on the fundamental rules of cyber security. Even if a few employees do roll their eyes, chances are some of them have been using the same password for years – so they really should be hearing it again. In an interview with Fortune, the CEO of the Computing Technology Industry Association said, “Behavior changes really only happen through repetition, follow-up, and emphasis. It takes a long time to instill new habits.”
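
    The password advice in point 3 – give employees rules of thumb, not just a demand – can be made concrete with a simple checklist. The sketch below is illustrative only; the specific thresholds and rules are assumptions, not an official policy:

```python
import re

def password_feedback(password: str) -> list[str]:
    """Return concrete suggestions for strengthening a password.

    The rules below are illustrative examples -- adapt them to your
    organization's actual password policy.
    """
    suggestions = []
    if len(password) < 12:
        suggestions.append("Use at least 12 characters.")
    if not re.search(r"[A-Z]", password):
        suggestions.append("Add an uppercase letter.")
    if not re.search(r"[0-9]", password):
        suggestions.append("Add a digit.")
    if not re.search(r"[^A-Za-z0-9]", password):
        suggestions.append("Add a symbol such as ! or #.")
    return suggestions  # an empty list means every check passed
```

    Framing the rules as actionable feedback (“add a digit”) rather than a pass/fail verdict mirrors the advice above: give employees something specific they can act on.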

    If we want to mitigate our employees’ risk, then we need to get serious about how we educate them about information security. If we honestly evaluate our cyber security training methods, we could probably all make some improvements. And that could make a real difference.

  • Why Zero Trust Is Not As Bad As It Sounds

    by Patrick Knight | Mar 01, 2018

    What is Zero Trust?
    “Zero Trust” refers to a network security strategy that calls for all users – internal and external – to be authenticated before gaining access to the network. Zero Trust means organizations never implicitly trust anyone with their sensitive data. Instead of using a blanket network perimeter, Zero Trust networks implement a series of micro-perimeters around data so only users with clearance to access certain data points can get to them.

    It essentially ensures that users are granted the least amount of access necessary to do what they need – and are authorized – to do. Zero Trust also means logging all traffic, internal and external, to look for suspicious activity and weak points.
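
    The “least access necessary, log everything” idea can be sketched in a few lines. This is an illustrative model only – the roles, resources, and policy structure are invented for the example, not taken from any particular product:

```python
from datetime import datetime, timezone

# Hypothetical micro-perimeters: each sensitive resource has its own allow list.
ACCESS_POLICY = {
    "payroll_db": {"hr_admin"},
    "source_code": {"engineer", "build_bot"},
}

audit_log = []  # in a Zero Trust model, every request is logged, allowed or not

def request_access(user: str, role: str, resource: str) -> bool:
    """Grant access only if the user's role is on the resource's allow list."""
    allowed = role in ACCESS_POLICY.get(resource, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "allowed": allowed,
    })
    return allowed
```

    Note that an unknown resource defaults to an empty allow list – deny by default – and that denied requests are still recorded, since failed access attempts are exactly the “suspicious activity” Zero Trust logging is meant to surface.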

    Why are companies adopting Zero Trust?
    Security breaches are getting more common and more expensive – despite increased security budgets. Zero Trust is more than a software platform; it’s an attitude about users and data. Rather than trusting internal network users and focusing on external hackers, organizations are wising up to the reality of malicious insiders and the need to play it safe by protecting information from all users.

    Security strategies are becoming an important part of the business conversation, and new measures and attitudes are being introduced. In an interview with CSO, Chase Cunningham of the firm that coined the term “Zero Trust” said that many companies are undergoing a digital transformation, and that as you move to the cloud, “there’s where you start your Zero Trust journey.”

    Zero Trust isn’t as harsh as it sounds
    The Zero Trust strategy isn’t saying, “no user is safe, ever.” Obviously companies can’t function with that mindset. Rather, it means that when it comes to sensitive data, people should have to prove they are authorized to see it before they’re granted access.

    60% of network attacks are by insiders – three-quarters of which are done with malicious intent. If the majority of network attacks are done by people who are traditionally trusted network users, why not start putting some restrictions on their access? That’s all Zero Trust says to do. It prioritizes privacy by making sure sensitive data is only accessed on a need-to-know basis.

  • 4 reasons why cyber security deserves a larger chunk of your hospital organization’s budget

    by Veriato | Feb 22, 2018

    In the medical community, the patient is paramount. There are countless methods employed to treat people and protect their health. But when it comes to their patients’ safety, most hospitals need a higher dosage of cyber security.

    Currently, health organizations are allocating less than half of what other industries budget for Information Security. This is no longer sufficient for a field with such high-value assets, and many factors play into the need for increased cyber security in the medical arena.

    1. Evolving healthcare technologies: Just in the last decade, health records have gone from mostly paper to totally electronic – and the digitization is continuing. Now employees access patient data via mobile devices and remote networks. Data sharing and cloud storage are necessities. Additionally, many medical devices themselves are now internet-enabled and some providers are embracing wearable tech for patients. Precision medicine, an emerging approach that customizes treatment based on patient-specific factors, also relies on the Internet of Things, and generates more sensitive data. As digital treatments, methods, and devices become more widespread, the opportunities for cyber attacks also increase. The AHA suggests that organizations put a scalable security plan in place now that can grow and adapt with the changing landscape.
    2. Increase in threats: With more online data, come more cyber threats. In 2015, around 100 million health care records were stolen. In 2016, organizations experienced on average one cyber attack per month. The value of EHRs has increased on the black market, enticing more cyber criminals. Organized crime rings target information systems to steal and sell specific information (social security numbers, billing info) or entire EHRs. Political groups and hacktivists seek to expose high-profile patient data to embarrass or discredit their enemies. Nation-state attackers try to seize groups of EHRs for mass exploitation of people. Even your own employees are security risks – from malicious insiders to those uneducated about cyber security best practices. The threats to patient data are diverse, dangerous, and escalating.
    3. Costly consequences: The Ponemon Institute reports that the average cost of a data breach for healthcare organizations is estimated to be more than $2.2 million. In another study, 37% of respondents reported a DDoS (distributed denial of service) attack that disrupted operations about every four months, totaling an average of $1.32 million in damage per year. In addition to huge monetary penalties, data breaches hurt organizations’ reputations, which can have ripple effects in business. Intellectual property such as research findings and clinical trial information can also be stolen and sold, negating years of work and monetary investment.
    4. Physical risk: A medical facility exists to help people heal. Even though cyber attacks are online, they can cause physical damage. In a Ponemon Institute study, 46% of respondents said their organization experienced an APT network attack that caused a need to halt services. This shutdown can seriously impact the treatment of patients. Additionally, attacks using ransomware are on the rise, in which hackers make a network inaccessible until the organization pays a ransom, usually in Bitcoin to make it untraceable. In the meantime, health care records can’t be accessed, meaning treatment may be delayed – resulting in health consequences or even death (and lawsuits). In this day and age, protecting patients means protecting your network. As Theresa Meadows, CIO of Cook Children’s Hospital, said in an interview for NPR: "The last thing anybody wants to happen in their organization is have all their heart monitors disabled or all of their IV pumps that provide medication to a patient disabled."

    Hospital organizations always put the patient first. An important – and undervalued – way to do that is to give cyber security the priority it deserves.

  • 3 ways cyber security is changing business operations

    by Veriato | Feb 15, 2018

    Businesses understand the importance of cyber security, and most are taking steps to ramp up their protection game. In fact, the International Data Corporation has projected worldwide spending on cyber security software, hardware, and services will reach $101.6 billion by 2020. That’s a 38% increase from the $73.7 billion spent in 2016.

    But cyber security is changing more than just budgets in the business world. Here are three ways companies are changing their business operations and models to improve cyber security.

    1. IT is taking a more prominent place in the Core Business. Gone are the days of a basement IT crowd whose main job was to tell you to try turning your computer off and back on again. With cyber security’s heightened priority, IT is taking a more prominent place in the core business. Hacks can stop business operations, harm corporate image, and of course, cost millions of dollars – proving cyber security is way more than an IT problem.

      IT departments are starting to align security spending with business objectives, proving that security isn’t a cost; it’s an investment. Savvy business leaders rope their tech team into operational planning, treating the department as a business partner in achieving goals. Businesses that are serious about succeeding are getting serious about cyber security.
    2. Regulations are rising. The first compliance date for the New York Department of Financial Services’ cyber security regulation was last August. This legislation was the first of its kind in the nation, requiring financial institutions to report attempted data breaches, hire a CISO to handle employee cyber security, and require their third-party providers to improve security as well. Though not as extensive as the New York regulation, 42 states introduced 240+ cyber security bills or resolutions in 2017, according to the National Conference of State Legislatures.

      On a global scale, China and Singapore have similar regulations, and the EU adopted extensive regulation with the GDPR in 2016. Many of these bills impact not just native businesses, but companies who do business in that country. With the increase in regulation, businesses need to change structures, communications, and policies to stay compliant. Companies need to invest in a robust legal team that can handle managing the upcoming regulations that will affect their operations.
    3. Subscription software is the new norm. By 2020, more than 80% of software will be sold via subscription, rather than the traditional model of licenses and maintenance, according to Information Week. This shift makes sense from a bottom-line perspective, but also from a cyber security perspective. The longer the same software is in use, the more time hackers have to expose and exploit its vulnerabilities. With a subscription model, the software is always current, making it more secure.

      Thanks to the cost-saving benefits of subscription software, businesses can use the extra budget room to implement more cyber security measures or invest in new data protection services. IT is embracing the subscription software model, and it’s having rippling effects across the entire business.

    As cyber security becomes more of a concern, businesses are changing to prioritize it. Spending is adjusted, objectives are aligned, and services are adapted to keep businesses secure, and therefore more successful.

  • Technical safeguards for HIPAA at the administrative level

    by Veriato | Jan 25, 2018

    This is the third post in a 3-part series on HIPAA data security. Here we discuss ways Veriato can help organizations reduce the cost associated with HIPAA compliance reporting while increasing data security.

    Requirement 164.308

    Administrative Safeguards

    Veriato acts as a core part of your implementation and maintenance of security measures and administrative safeguards to protect patient data, specifically around monitoring and reviewing the conduct of your workforce in relation to the protection of patient data.

    Below are some examples of how Veriato can assist in addressing some of HIPAA’s Administrative Safeguards:

    • Risk Analysis (Required) § 164.308(a)(1)(ii)(A) – Veriato’s visibility into how users access, interact with, and use patient data can be utilized to assess the confidentiality, integrity, and availability of patient data, regardless of application used.
    • Information System Activity Review (Required) § 164.308(a)(1)(ii)(D) – By providing per-user activity detail and reporting, Veriato supplies the most comprehensive and contextual activity review possible, showing when patient data is accessed, as well as the actions performed before and after the access in question.
    • Log-in Monitoring (Addressable) § 164.308(a)(5)(ii)(C) – Veriato facilitates the monitoring of and reporting on log-ins which can be used to identify suspect activity.
    • Response and Reporting (Required) § 164.308(a)(6)(ii) – In cases where the suspected or known security incident involves a user’s application-based interaction with patient data, Veriato provides the activity detail necessary to document the security incident and its outcome.

    Requirement 164.312

    Technical Safeguards

    Veriato’s advanced user activity monitoring and behavior analysis technology can be leveraged to define policies and procedures that ensure patient data remains protected, giving you HIPAA technical safeguards at the highest level.

    Below are some examples of how Veriato can assist in addressing some of HIPAA’s Technical Safeguards:

    • Audit Controls (Required) § 164.312(b) – Veriato not only empowers security teams to record and examine user activity within systems containing protected patient data, but also within any other application, providing unmatched visibility into actions taken around patient data access.
    • Mechanism to Authenticate Electronic Protected Health Information (Addressable) § 164.312(c)(2) – Because Veriato records and can playback all user activity involving protected patient data, it provides the ability to demonstrate that patient data has not been altered or destroyed in an unauthorized manner.

    Requirement 164.414

    Administrative Requirements & Burden of Proof

    In an organization’s time of need, when demonstrating either HIPAA compliance – or the lack thereof – is necessary, the determining factor will ultimately be the answer to the question “Was patient data improperly used?”. This will require an ability to review the exact actions taken by one or more users, both within and outside of an EHR application.

    Below are some examples of how Veriato can assist in addressing this HIPAA requirement:

    • Administrative Requirements § 164.414(a) – Veriato’s ability to record, playback, and report on detailed user activity can help demonstrate compliance with the Safeguards portion of the Administrative Requirements § 164.530(c).
    • Burden of Proof § 164.414(b) – In the event of a suspected breach, Veriato uniquely facilitates the playback of specific user activity to either demonstrate the lack of a breach, or to help define the scope of one.

    Requirement 160.308

    Compliance Reviews

    Whether as part of a suspected violation or other circumstances, compliance reviews of administrative provisions around appropriate access to, and usage of, patient data can be simplified by demonstrating enforcement of policies and procedures through Veriato’s activity reports and activity playback.

  • Security concerns and solutions for staying HIPAA compliant

    by Veriato | Jan 23, 2018

    HIPAA Security Challenges for Key Stakeholders

    While HIPAA itself isn’t broken out into separate objectives for each stakeholder in the organization, stakeholders each have different needs around the goal of adhering to HIPAA:

    • CEO – Needs a proactive approach leveraging people, processes, and technology that ensures adherence to HIPAA requirements around safeguarding patient data.
    • CFO – Can’t afford the cost of a breach in compliance. Would rather spend budget on preventative measures, than on responding to a breach.
    • CCO – Wants a plan in place for how to easily and quickly demonstrate compliance.
    • CSO – Desires for patient data to remain secure, and a way to know patient data isn’t being misused.
    • IT Manager – Needs to provide a means of visibility into exactly how patient data is used, regardless of application.

    What’s needed is a technology that cost-effectively addresses HIPAA security challenges and requirements directly by monitoring the access to patient data, aligning with established policy and processes, providing visibility into how patient data is used or misused, and providing context around either demonstrating compliance or determining the scope of a breach.

    How Veriato Helps Address HIPAA Security Challenges

    Veriato helps organizations of all kinds satisfy their HIPAA obligations by offering technical solutions through detailed, contextual, rich logging of all user activity – both inside an EHR as well as any other application – combined with robust screen recording and playback. This level of visibility into user interaction with patient data provides comprehensive evidence for compliance audits. Activity data is searchable, making it easy for auditors, security teams, or IT to find suspect actions, with the ability to play back activity to see before, during, and after the action in question. Reports can be produced in minutes – a fraction of the time typically needed – and don’t require pulling critical resources from other tasks.

    Veriato assists in meeting a number of specific requirements, leveraging its deep visibility into user activity to provide context around access to patient data, showing what was accessed and what was done with the data. 

    In our next blog post, the last of a three part series, we will walk through a few of these requirements and illustrate how Veriato helps further address some of the HIPAA security challenges faced today. 

  • Expert advice on HIPAA data security

    by Veriato | Jan 18, 2018

    The biggest challenge in ensuring HIPAA data security is people.

    At its core, HIPAA compliance is simply about maintaining patient privacy by ensuring the appropriate access to and use of patient data by your users. Electronic Health Record (EHR) solutions provide detail around when patient data is accessed, but without visibility into what users do with sensitive patient data after they access it, the risk of data breaches, compliance violations, and the investigations, fines, and reputational damage that comes with them, is significantly increased. 

    Organizations seeking to meet HIPAA requirements for data security and technical compliance are expected to demonstrate proper use of patient data through appropriate administrative and technical safeguards. While most organizations focus their efforts on implementing safeguards that revolve around an EHR system already designed to be HIPAA compliant, today’s computing environments make it quick, easy, and convenient to repurpose accessed patient data in an unauthorized fashion. Webmail, cloud-based storage, USB storage, web-based collaboration tools, and even printing are just some of the ways users can improperly save, steal, and share patient data – making monitoring activity only within an EHR a shortsighted strategy, if the goal is to truly be able to demonstrate compliance.

    The penalties for a HIPAA data security breach are severe – ranging from hundreds of dollars per record, up to $1.5 million, depending on the tier of the infraction. Avoiding these penalties depends solely on an organization’s ability to ensure proper controls concerning HIPAA technical compliance are in place, and that access to patient data is properly secured.


    So, what’s needed is a means to have complete visibility into every action performed by a user with access to patient data – every application used, webpage visited, record copied, file saved, printscreen generated, and page printed. Only then will a covered entity truly know whether patient data has been appropriately accessed and used.

    But, compliance to HIPAA isn’t just a technical battle; it’s one filled with policies and procedures that, in conjunction with technology, ensure users are trained, access to patient data is correctly granted, use is appropriate, and compliance can be demonstrated.

    In the next two blog posts, we will discuss challenges for key stakeholders and ways that Veriato can help address HIPAA data security and technical compliance challenges.

  • Defense Against Enemies, Foreign and Domestic

    by Patrick Knight | Nov 09, 2017

    “I, _, do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same…”

    This is a portion of an oath I took many years ago when joining the United States Army and afterwards in government service. Its origins point to Article VI of the US Constitution, and it is codified in Title 5, US Code § 3331 as the oath of federal service.

    Although I left government service nearly twenty years ago, I had never given the entirety of the wording of this oath much thought. Defending against foreign enemies is easy for most people to visualize; it is against external threats, known and unknown, that we devote most defense assets in nearly all aspects of life.

    The inclusion of defense against domestic threats in this oath points to our Civil War and the desire to keep intact something that is fragile and worth defending against internal attempts to break our union apart – to protect against insurrection or destruction from within.

    The service I willingly gave undeniably led to my career fighting cyber threats. In fact, I have spent my entire adult career in the security profession in one manner or another, and continue to do so to this day.

    This weekend our country will celebrate our veterans, who served mainly to defend against foreign enemies but who also stood ready and swore to protect against domestic threats. It is my sincerest desire that we need defend against neither, but that we are prepared to defend against both.

    I want to thank our veterans and those currently serving their country. In an uncertain and dangerous world, that service is something to be proud of and very much needed.

  • Malware Evading Some Antivirus Using Invalid Certificates?

    by Patrick Knight | Nov 03, 2017

    Many antivirus and endpoint security technologies fight a two-front battle. On the one hand, they must block malware threats from executing on the system. On the other hand, they need to avoid falsely detecting legitimate software so they don’t cripple the system or their users’ abilities to use valid software.

    One technique antivirus scanners may use to avoid blocking legitimate software is to trust files that are digitally signed by certificates that the security software trusts. For example, most executable files distributed as part of a Windows installation by Microsoft are digitally signed by a Microsoft certificate. As new versions of software are released through updates or patches, the antivirus scanner might check and skip the file – preventing false positives – as long as the file is validly signed by the trusted certificate.

    A study by University of Maryland computer science researchers, “Certified Malware: Measuring Breaches of Trust in the Windows Code-Signing PKI,” observed that many unsigned ransomware files detected by major antivirus products were no longer detected once invalid digital certificates were appended to them. The authors believe that “this is due to the fact that AVs take digital signatures into account when filter and prioritize the list of files to scan [sic]…”

    This may be correct. It implies that the antivirus scanner isn’t verifying the validity of the certificate on the file and is trusting the file merely because a certificate is present. The authors don’t state whether the resulting ransomware evasion was verified to be due to digital certificate trust in the PKI model.

    Another possibility is that certain malware detections are specific to exact file hashes (e.g. MD5, SHA-256), and that even the slightest modification to the file – such as appending an invalid X.509 digital certificate – alters the file’s hash and thus potentially breaks the detection.
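
    This second possibility is easy to demonstrate: appending any bytes to a file – here, junk standing in for an invalid certificate blob – changes its cryptographic hash, so a detection keyed to the original hash no longer matches. A minimal sketch (the byte strings are harmless placeholders, not real malware):

```python
import hashlib

# Placeholder bytes standing in for the original executable.
original = b"MZ\x90\x00 pretend this is the original executable"
known_bad_hash = hashlib.sha256(original).hexdigest()  # what a hash-based signature keys on

# Append junk bytes standing in for an invalid X.509 certificate blob.
modified = original + b"-----FAKE CERTIFICATE BLOB-----"
new_hash = hashlib.sha256(modified).hexdigest()

# The file's behavior would be unchanged, but a detection keyed to the
# original hash no longer matches the modified file.
print(known_bad_hash == new_hash)  # False
```

    Any single changed or appended byte produces a completely different digest, which is exactly why hash-specific detections are so brittle against trivial file modifications.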

    Either way, this highlights an antivirus evasion technique. Files that contain an invalid digital certificate – for any of a variety of reasons – are still allowed to run on a Windows system, and in most cases the user would be unaware. The one major exception is native system drivers, which operate in kernel space and are required to have a valid and trusted digital certificate in order to execute.

    I altered a tool of my own (not malware) by copying a digital certificate from another validly signed file and setting the fields in the file header to recognize that certificate structure. A certificate validation check reported the file as invalid (TRUST_E_BAD_DIGEST), but the tool still executed with no errors.

    Had this been a detected piece of malware, it is possible that the malware would still execute but no longer be detected unless the antivirus rule was more generic. Generic signatures against many modern malware families are difficult to create due to the sophistication of techniques used by malware authors to evade antivirus detection. Detections against many known variants of malware are often very specific. This is how ransomware and other malware often still get through the strongest of endpoint defenses.

    This type of antivirus evasion is not new but does illustrate how modifying any piece of malware in a way that doesn’t affect its original operation can result in its undetected reuse, signed or not.

    If an antivirus product is trusting digitally signed files with invalid certificates, this has additional ramifications. Malware could trivially append a Microsoft, Adobe, or Oracle certificate and masquerade as legitimate software with impunity. For most antivirus products, the evasion or change in detection may only be the result of a change in the file’s hash after the modification, unrelated to digital certificates specifically.

    Hackers abusing digital certs smuggle malware past security scanners

    Certified Malware: Measuring Breaches of Trust in the Windows Code-Signing PKI

  • Insider Threats Are the Greatest Risk to Your Data—Here’s How to Stop Them

    by Veriato | Oct 23, 2017

    From an article by Stephen Voorhees, CISSP and Senior Sales Engineer at Veriato, published on SmallBusinessToday.com:

    Most companies have already hunkered down to prevent hackers from stealing proprietary data. Their security teams have almost certainly installed powerful firewalls. Some companies may have acquired robust security systems to protect themselves against ransomware, the malicious code that cyber criminals use to encrypt your data and hold it hostage until you pay a hefty ransom.

    The trouble is, there’s a far greater threat to your company’s data from people inside your organization.

    To read the full article, click here.

  • U.S. Elevates Cyber Command to Combatant Status

    by Patrick Knight | Aug 30, 2017

    On August 18, the United States Cyber Command was elevated from a sub-unified command under U.S. Strategic Command to equal status with combatant commands such as USSTRATCOM (U.S. Strategic Command), USSOCOM (U.S. Special Operations Command), and USCENTCOM (U.S. Central Command).

    This substantial move – originally proposed by former President Obama – is long overdue and recognizes the enormous importance of protecting the U.S. from cyber attacks by foreign adversaries attempting to disrupt the U.S. government, military, infrastructure and industries. Responses to attempts by foreign agents to spread ransomware, disrupt critical infrastructure, hack servers and databases or spread disinformation designed to confuse or negatively influence public opinion in the United States will now fall under a command which has the same seat at the table as a command that deploys Special Forces units worldwide to fight terrorism.

    A “combatant” command is distinguished by being composed of forces from more than one military branch, and it receives full funding and support commensurate with its area of responsibility. In other words, it is not marginalized: it has the authority to execute its mission and is adequately staffed and funded.


    Where is your cyber command?

    Whether with national security or your enterprise security, cyber security should not be marginalized on the sidelines. Whether your industry is in the financial sector, public health sector, education, government agencies or defense contractors, you have much at risk from cyber threats and the risks are growing. A 2017 survey of 1900 cyber security professionals from these and other major industries shows that the three major cyber security concerns for enterprises are email phishing attacks, insider threats and malware.

    Take a look at your enterprise. What data do you stand to lose? Are you prepared to react to an internal or external data breach? A security strategy must first recognize what damage could occur from an external or internal attack. This includes downtime due to a denial of service (DoS) or other external attack, loss of intellectual property (IP) or customer data from internal or external threats, and loss of data due to ransomware, advanced persistent threats (APTs) and other malware.

    You must make a full evaluation of the resources you have available, and plan for the resources still needed to fully protect intellectual property, customer data, employees and other users. You must also have an incident response plan for reacting to any security breach – and you must exercise it.


    What is your cyber strategy?

    The security model you enact must reflect the great risk to your enterprise today and your ability to respond and recover. Who in your enterprise governs your security strategy, and at which level that responsibility lies, will say a lot about your readiness to deal with a breach when it happens and about the importance you place on protecting IP, customer data and other sensitive information.

    Any modern enterprise should have its own cyber command: an information security organization, and a response plan, with the scope and authority needed to act across the rest of the business.



    Wired: The US Gives Cyber Command the Status It Deserves

    Veriato Cyber Security Trends 2017


  • Additional Insight into Quantifying Insider Risk

    by Veriato | Jun 29, 2017

    From an article by Veriato's CSO published on infosecurity-magazine.com:

    Never before have there been so many platforms that let a growing number of people touch, manipulate, download, and share sensitive data.

    But there’s a dark side to all that access: It exposes a company to malicious intent and theft of information worth thousands, sometimes millions, of dollars. More alarming is the fact that less than half (42 percent) of all organizations have the appropriate controls in place to prevent these attacks, according to the Insider Threat Spotlight Report.

    To read the full article, click here.

  • Why Trust is not Enough

    by Veriato | Jun 15, 2017

    Typically “insider threats” are defined as individuals with malicious intent: the employee who was passed over for a promotion, the developer who insists that code she was paid to develop belongs to her, the contractor who installs malware on the POS system, and so forth. However, there is another group of potential insider threats. These individuals may not have malicious intent and may be quite loyal to the company, its strategy, and its future success.

    They are in a position of trust within the enterprise. From a cyber security perspective, however, the unfettered access these individuals have to some or all of the company’s sensitive cyber assets is cause for concern. Consequently, these individuals are in what may be defined as “high-risk” positions. This is not to say the company has reason to be concerned about the intent, motives, or loyalty of these individuals under normal circumstances.

    However, it is possible that the access these people have to high-value and critical assets may be used in ways other than for the intended company purposes.

    Trusted insiders may use their access to satisfy their curiosity. Impostors may steal their authentication credentials. These people may even be placed under excessive duress – such as a credible threat of physical harm against family members. Even if individuals in high-risk positions remain loyal and dedicated to the company, attackers can leverage their privileged access to do the company irreparable harm.

    Furthermore, malicious intent is not the root of insider threats. Every company necessarily needs some individuals with elevated system access to perform certain roles. The individuals in these high-risk positions are necessarily entrusted with access to valuable cyber assets – and most of them perform their regular duties with loyalty and dedication to the company.

    Surprisingly, though, these same people cause 68% of insider incidents through simple negligence. Intent is not the root of insider threats – authenticated access to assets is.

    For more on this topic, access our free whitepaper.

  • How to Prevent Departing Employees from Pocketing Your IP

    by Veriato | May 08, 2017

    When George, a senior salesman, started his current position 10 years ago, he brought hundreds of business cards and notes that he added to his employer’s customer relationship management (CRM) software. When he decided to leave, he thought he could take the current database with him. Not true.

    “While George may believe that he has a legitimate claim to the customer information because he brought in hundreds of new names and personally worked cultivating those and other relationships for a decade, all the information in the CRM belongs to the employer,” David A. Smith, a CISSP, wrote in a recent whitepaper, How UEBA mitigates IP Theft By Departing Employees. “George’s transfer of his current employer’s valuable and confidential digital assets is theft.”

    While this particular incident is hypothetical, similar situations – whether inadvertent, as George’s was, or deliberate – happen far too often. For example, a recent security survey showed that 87 percent of departing employees take data they worked on, including confidential customer information, price lists, marketing plans, sales data, and competitive intelligence, and 28 percent take data created by others. The loss of this intellectual property (IP) can be devastating.


    So, what steps can you take now to prevent this type of theft when employees decide they’re ready to move on?  

    Establish – and Enforce – Corporate Policies: While some employees who share proprietary data with outside sources or take it with them to their next place of business might do so maliciously, others might simply be unaware that it doesn’t belong to them. Having a strong, plainly written Confidentiality and Intellectual Property Agreement in place can help to clear up the gray areas that exist when employees involved in the creation of IP perceive they have an ownership stake in it. Reviewing that Agreement with a departing employee also acts as something of a deterrent against IP leaving with them. (See the white paper, 3 Steps to Protect Your Data During The High Risk Exit Period, at http://www.veriato.com/docs/default-source/whitepapers/3-Steps-to-Protect-Your-Data-During-The-High-Risk-Exit-Period.pdf)

    Ensure that the confidentiality and IP agreements outline what data employees can take with them when they leave and what needs to stay behind, as well as any consequences for its removal. And ensure that the document is written in terms that people who do not work with legal contracts as part of their everyday role will readily understand. 

    Monitor Behavior: While it’s possible for humans to get an idea of when changes in an employee’s behavior might indicate an increasing probability of IP theft, it would be “impractical, if not outright impossible, for an organization’s cyber security staff to observe and monitor each employee,” Smith wrote. Instead, companies should implement technology such as user and entity behavior analytics (UEBA) – with advanced machine learning algorithms – to help define what is normal behavior for each user, so any anomalies are easier to detect and investigate. UEBA compares each user’s real-time activities against their recorded behavior baseline and alerts the designated response team (likely cyber security) so it can investigate more closely. When coupled with user activity monitoring (UAM) software, security can see whether the employee is emailing or otherwise transferring data he doesn’t normally transfer, downloading lists onto external devices, or logged into the IT server at 2 a.m.

    To help with this process, the insider risk team should quantify employee risk, giving each employee a score of 1 to 10. Some employees may have a low score, meaning they do not need to be monitored as closely because they have access to less proprietary information; higher-level executives (even security staff themselves) may have a high score, meaning they should be monitored more closely. When employees tell their managers or HR that they’re planning to leave, the risk score should be set to 10, triggering a review of 30 days’ worth of online and communications activity. The 30 days leading up to notice of resignation is the ‘high-risk exit period’ during which IP is most at risk.
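    The scoring-and-escalation logic described above is simple enough to sketch in code. This is an illustrative Python sketch, not a description of any particular product; the record fields and the `on_resignation_notice` function are hypothetical:

    ```python
    from datetime import date, timedelta

    HIGH_RISK_EXIT_DAYS = 30  # the high-risk exit period discussed above

    def on_resignation_notice(employee, notice_date):
        """Escalate a departing employee to maximum risk and define the
        activity-review window covering the high-risk exit period."""
        employee["risk_score"] = 10  # top of the 1-10 scale
        window_start = notice_date - timedelta(days=HIGH_RISK_EXIT_DAYS)
        employee["review_window"] = (window_start, notice_date)
        return employee

    emp = {"name": "G. Sanders", "risk_score": 4}
    emp = on_resignation_notice(emp, date(2017, 5, 8))
    # emp["review_window"] now spans the 30 days before notice was given
    ```

    The point of the sketch is that escalation should be automatic and should immediately define which window of activity the review team must examine.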

    Limit Data Access: Only give employees access to data they need to do their jobs. This will keep them from accessing other corporate information, and according to Smith, “in most cases it will also prohibit the installation of any hardware or software that can be used for the exfiltration of data (i.e. being able to create CDs or DVDs, or to copy data to a thumb drive).” To prevent users from transferring data they shouldn’t, the organization should also consider configuring firewalls to block malicious websites and sites that can be used to transfer data, encrypting all data at every stage of storage and transport, and requiring user authentication to access encrypted data.

    Fortunately, having strict policies in place, and communicating these policies (and any consequences for breaking them) will deter many departing employees from taking data that doesn’t belong to them. However, being able to analyze employee actions and behavior, detect whether any anomalous behavior poses an actual threat, prioritize which behaviors might be most damaging to a company, and then respond appropriately, could be even more critical to preventing valuable IP from leaving when your employees do.  


    Cyber security expert, author and speaker Derek A. Smith (CISSP) and Nick Cavalancia from Techvangelism will host a webinar on this topic on May 11 at 11 a.m. EST: http://www.veriato.com/lp/webinars/how-ueba-mitigates-ip-theft-by-departing-employees

  • To cloud, or not to cloud. That is the question.

    by Stephen Voorhees, CISSP | Mar 21, 2017

    If you are thinking about storing sensitive information in the cloud, you need to be as sure of the security of that data as you would be if you stored it on your own infrastructure. In effect, you are outsourcing data storage. And there are good, valid reasons to do so. Most of them stem from lower costs (or the perception of lower costs) and reduced management overhead.

    Here is a list of questions you need answers to before committing to a cloud-based service.

    Physical Security

    • What access controls are in place at the data center?
    • Is the data center SAS 70 certified (or certified under its successor, the SSAE 16 / SOC reporting standards)?
    • What are the processes and procedures around physical access to the servers where your data is stored?
    • Who is allowed access?
    • How are they vetted from a security perspective?
    • What background checks were performed?
    • How is the staff that has access monitored?

    If the provider you are thinking about trusting with your data is serious about security, they will be able to produce a document that speaks to this without hesitation.


    Logical Security and Operations

    • What happens if another customer in the shared environment overuses their capacity?
    • What are the impacts to you?
    • What guarantees are you offered that your performance will not be impacted?
    • What logical security exists to ensure that no one else besides you (and the people at your outsourced provider) can access your data?
    • What encryption is used when the data is in motion?
    • What encryption is used when the data is stored in their data center?
    • What auditing exists so you can see how your data is being accessed and, in the worst case, how a breach occurred?
    • What disaster recovery options are offered?
    • What is their Recovery Time Objective (RTO) to restore your data in event of a hardware failure?
    • What is their Recovery Point Objective (RPO) that measures their tolerance for data loss, and is it an acceptable level for your company?
    • Who has access to the backups?

    A quality provider will be able to provide detailed documentation that addresses these questions without hesitation.

    Veriato supports private cloud deployments, and encourages our customers to be certain they have addressed the above before deploying our technology into a shared cloud infrastructure. While many of our customers elect to deploy using a private cloud, routine surveying of our customers – particularly those in financial services, healthcare, pharmaceuticals, and manufacturing (areas where compliance mandates require greater control and where the value of corporate data is fully understood) – tells us that an on-premises deployment remains their preferred approach.


  • Don’t Be Held Hostage By Ransomware. Stop The Attack Before Critical Damage Is Done.

    by Mike Tierney | Mar 06, 2017

    Ransomware, a type of malware that encrypts your critical files until money is paid, continues to wreak havoc on organizations worldwide. In fact, studies show that more than half of organizations have experienced a ransomware attack, that it takes them an average of 33 hours to recover lost data, and that only 23 percent of companies completely recover their lost data. According to researchers, these attacks were projected to cost companies $1 billion in 2016 alone!

    To combat the issue and prevent these potentially astronomical costs in money, resources and undue stress, Veriato today introduced its new RansomSafe solution. RansomSafe detects and stops ransomware attacks on file servers as they occur.

    How does it work?

    • With a robust database of known variants, combined with deception-based techniques to detect unknown variants, RansomSafe can detect an attack before it succeeds in encrypting the organization’s critical files. It also blocks the user account attempting to encrypt files and change the file system, shutting down the attack to prevent further encryption attempts and to minimize the restoration effort required.
    • RansomSafe backs up a company’s files immediately before they are changed, making copies of the latest version of the files and storing them safely away, out of reach of the attack.
    • Once the attack is disrupted, the most current versions of any affected files can be recovered in minutes with just a few clicks.
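    The deception-based technique mentioned above can be illustrated in miniature with a canary (decoy) file: no legitimate process should ever touch it, so any modification or deletion of it is a strong signal of an encryption attack in progress. This is a generic sketch of the idea in Python, not Veriato’s implementation; the decoy filename is hypothetical:

    ```python
    import hashlib
    import os
    import tempfile

    def plant_canary(directory):
        """Create a decoy file and return its path and content hash."""
        path = os.path.join(directory, "~finance_backup.xlsx")  # enticing decoy name
        data = b"decoy-content-v1"
        with open(path, "wb") as f:
            f.write(data)
        return path, hashlib.sha256(data).hexdigest()

    def canary_tripped(path, expected_hash):
        """True if the decoy was altered, encrypted, or deleted."""
        try:
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest() != expected_hash
        except FileNotFoundError:
            return True  # deletion also counts as tampering

    with tempfile.TemporaryDirectory() as d:
        path, digest = plant_canary(d)
        untouched = canary_tripped(path, digest)   # file intact: not tripped
        with open(path, "wb") as f:
            f.write(b"\x00encrypted\x00")          # simulate ransomware rewriting the file
        tripped = canary_tripped(path, digest)     # content changed: tripped
    ```

    A real product pairs this detection with the account blocking and pre-change backups described above; the sketch covers only the detection signal.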

    As a result, companies save significant time, money and undue stress associated with ransomware attacks.

    According to Veriato CSO David Green, “This vital layer of defense can be installed in minutes and costs far less than a ransom payment. RansomSafe saves the time and money a company would spend in reclaiming their systems and restoring files, and offers companies peace of mind, knowing these attacks will be detected and stopped before they really start!”

    Learn More About RansomSafe
  • Step 5 of 5 to Quantifying Insider Risk

    by Mike Tierney | Feb 14, 2017

    Address Risk During your Termination Process

    One of the best practices found in the Common Sense Guide to Mitigating Insider Threats – a document written well ahead of its time by the world-renowned CERT division of Carnegie Mellon University’s Software Engineering Institute (SEI) – is the need to develop an employee termination process that takes into account the threat a departing employee can pose.

    Whether an employee is being terminated or leaving of their own accord, the exit period is one of the highest-risk timeframes for an organization. Loyalties quickly shift from the organization to the individual, and thoughts move from responsibilities to the soon-to-be “former” employer to a focus on the next job and its requirements.

    To mitigate insider risk during this high-risk exit period, two processes must be put in place – one to address an employee who is being involuntarily terminated, and another to address a voluntary termination (resignation) involving a notice period. Note that this guide touches on steps normally taken by HR; however, it focuses strictly on the steps that help mitigate insider risk, and should not be misconstrued as presenting a comprehensive termination process.

    While the two processes have very similar steps, they should be treated as separate processes to ensure service levels are properly defined and met when put into action.

    Involuntary Termination

    This involves a situation where an employee is being laid off or discharged. Since in most cases this is not a pleasant separation, the assumption is that the employee’s loyalties will quickly diminish to zero, putting the responsibility of ensuring confidentiality and the security of organizational data and resources firmly on members of the Security team. The process should begin the moment the decision is made to terminate employment, and will include one or more of the following tasks (depending on your organization):

    • Notification of desire to terminate – This is the first step in the process where internal management notifies HR (or equivalent) of the need to terminate an employee. This needs to be done as soon as the decision is made.
    • Notify IT of termination and employee last day – IT needs to be informed that an employee’s access will need to be revoked, and when to revoke it. This also should initiate an audit around any organizational assets currently possessed by the employee.
    • Conduct a 30-day Activity Review – A review of the last 30 days of the employee’s online and communications activity must be conducted. In many cases, involuntary termination isn’t a surprise to the employee, and loyalties may have shifted weeks prior, giving the employee ample time to exfiltrate data. This 30-day period has been shown to be when a great deal of IP and other confidential information is taken. Completing a comprehensive review without the kind of detailed information that UAM can provide in a single pane of glass can be difficult; if you cannot convince your organization to provide purpose-built tools, make best efforts using what you have – but be sure to let management know the report is based on what you had available.
    • Notification of any inappropriate activity found – Should any questionable, or unquestionably inappropriate actions be found during the activity review, HR and Legal should be notified. The actions found may have consequences on how the termination itself will proceed.
    • Employee notification of termination – This begins the actual process of terminating employment.
    • Review of Signed CIPA with employee – One of the first steps in every termination, the CIPA should not just be presented to the employee but reviewed with them, explaining the obligations the document lays out – obligations the employee has previously agreed to. At this point it is also prudent to mention that the employee will be asked to sign a Certificate of Return and Destruction prior to leaving.
    • Terminate Access – While notifying the employee and reviewing the CIPA, access to all data, systems, applications, and resources should be terminated by IT.
    • Return or destruction of company property – All company property should be returned and all company data (in all possible forms – printed, in email, stored in files on a USB drive or cloud storage, etc.) should either be returned or destroyed.
    • Obtain signed Certificate of Return and Destruction – The employee is asked to sign this legally binding document, indicating that they are neither taking, nor retain access to, any company data – whether confidential or not – once they leave the organization.

    Each task should have a responsible role or individual and a service-level timeframe assigned, so that expectations about response times are communicated to everyone involved. The timeframes will vary based on risk scores and perceived immediacy, and exceptions will occur. Some tasks require that another role or individual be notified; where appropriate, this should also be documented with the task.
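    One way to make the role and service-level assignments concrete is to encode each task as data that a runbook or ticketing script can check. A minimal sketch in Python (task names abbreviated from the list above; the roles, notification targets, and SLA hours are illustrative only, not prescribed by the guide):

    ```python
    TERMINATION_TASKS = [
        # (task, responsible role, also notify, SLA in hours)
        ("Notify HR of decision to terminate",       "Manager",  ["HR"],           4),
        ("Notify IT of termination and last day",    "HR",       ["IT"],           4),
        ("Conduct 30-day activity review",           "Security", ["HR", "Legal"], 48),
        ("Review signed CIPA with employee",         "HR",       [],               1),
        ("Terminate access to systems and data",     "IT",       ["Security"],     1),
        ("Obtain Certificate of Return/Destruction", "HR",       ["Legal"],        1),
    ]

    def overdue(tasks, elapsed_hours):
        """Return the names of tasks whose service level has been exceeded."""
        return [name for name, role, notify, sla in tasks if sla < elapsed_hours]
    ```

    With the tasks in one structure, checking `overdue(TERMINATION_TASKS, elapsed)` during a termination surfaces exactly which responsible roles are behind their service levels.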

    Voluntary Termination

    When an employee leaves of their own volition, this process begins the moment notice is given (one of the differences between this and the involuntary termination process). Voluntary termination can also be initiated by an employee no longer showing up for work for a designated number of days without providing notice, in which case the process begins based on HR’s definition of Job Abandonment.

    The process for a voluntary termination is very much like that of the Involuntary Termination, with a few task exceptions:

    • Employee provides notice – There is an assumption (putting job abandonment aside) that the employee will provide notice, kicking off the termination process.
    • Notice period determination – The organization needs to decide whether they wish to accept the notice period, or modify it.
    • Conduct a +/- 30-day Activity Review – If an employee’s notice period is accepted, their activity from the date of notice to the date of employment termination should be reviewed, in addition to the 30 days prior to the date notice was given.

    Another difference will be the timeframes for each task. For example, the review of the CIPA should happen on the day notice is given, rather than on the day of termination, as in the involuntary process. Lastly, service levels may also differ – such as when IT is notified of the termination: in the voluntary scenario, IT should be notified the same day notice is given, while in an involuntary termination it should be notified immediately.

    See Guide Essentials: Risk-Lowering Termination Process – Use this document as the basis for defining termination process roles, actions to be performed, assignments of actions, and service levels to be met.

    While the entire process has been simplified down to just 5 steps, determining where to begin can be pretty daunting. Do you need to start scoring every position within the organization? You already have a job to do, so it’s unlikely you could even if you needed to.

    In reality, the most important part of where to start is simply starting. Begin with any open positions that are being filled by HR – these will be filled by people you know the least. Score those positions, along with a few positions you know should be of higher risk as a point of reference. Once you have those completed, you can begin to profile positions you know represent an insider risk (just not how much) – those that daily interact with confidential data, intellectual property, customer data, and the like - and begin to build out a comprehensive set of positional risk documents.

    Even if you don’t have a UAM or UBA solution ready to implement, quantifying insider risk at least gives you some perspective on how big the problem is within your organization – which may help speed up the selection and purchase of a solution to help monitor user behavior and activity.


    Insider threats represent one of the greatest challenges organizations face today. Not only do they involve your organization’s most confidential and valuable data, they are also the most difficult threats to identify. Insider risk begins the moment an employee sets foot in the door, and ends the moment the door permanently closes behind them. So it’s important to follow this guide from beginning to end, to properly implement controls that protect the organization from insider risk at all stages of an employee’s tenure.

    By taking the steps outlined in this guide, you will have a better understanding of just how much insider risk exists, and – more importantly – where it exists. The guide also provided enough direction to put preventative steps in place to be able to thwart, detect, and – if needed – document insider threat activity.

  • Step 4 of 5 to Quantifying Insider Risk

    by Mike Tierney | Feb 11, 2017

    Align Risk Levels to Everyday Controls

    At a very high level, the risk scores equate to how the organization sees the position, department, or individual in terms of potential exposure. Because a successful insider attack will result in harm to the organization, the appropriate response is to watch for signs of elevating insider risk (metastasizing into threat), using a level of scrutiny aligned to each risk level. In general, those with a lower level of risk only need to be monitored for leading indicators of elevating risk. Those posing a higher level of risk need to be monitored far more carefully – with an ability to rapidly review their actions in detail if necessary.

    You should group your assigned risk scores into two or more categories that correspond to implementations of the following technical controls (more detail on how to best take advantage of each of the technologies below is provided in the Guide’s Epilogue):

    Lower-Risk Everyday Control – User Behavior Analytics

    While those determined to pose a lower level of risk (per the outcome of your Assigning Risk process) appear to pose no significant threat to the organization, it is critical to remember that risk can shift without warning. It is therefore necessary to – at a minimum – analyze their behavior so you can proactively detect when a low-risk individual begins to pose a higher risk, based on leading threat indicators.

    User Behavior Analytics (UBA) watches both an individual’s interaction with company resources and their communications, baselining what is considered “normal” in order to detect anomalies that suggest an insider threat. Using a combination of machine learning algorithms, data science, and analytics, UBA can quickly identify when an employee is demonstrating behaviors synonymous with malicious insiders – or if an external actor intent on harming the organization has compromised the credentials of the employee.
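    The baselining-and-anomaly idea at the heart of UBA can be illustrated with a deliberately simple statistical sketch. Real UBA products use far richer models than this; the metric (daily file-export counts) and the three-sigma threshold are illustrative assumptions:

    ```python
    from statistics import mean, stdev

    def is_anomalous(history, today, threshold=3.0):
        """Flag today's activity if it deviates more than `threshold`
        standard deviations from the user's own baseline."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today != mu
        return abs(today - mu) / sigma > threshold

    # 30 days of one user's daily file-export counts (hypothetical)
    baseline = [2, 3, 1, 2, 2, 4, 3, 2, 1, 3] * 3

    is_anomalous(baseline, 2)    # an ordinary day: not flagged
    is_anomalous(baseline, 250)  # a sudden mass export: flagged
    ```

    The key design point carries over to real systems: "anomalous" is always relative to that user's own baseline, not to a global rule, which is what lets the same control cover very different roles.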

    Higher-Risk Everyday Control – User Behavior Analytics + User Activity Monitoring

    For those demonstrating higher levels of risk, the organization needs to collect and maintain a system of record of their activity, while mining that activity for signs of insider threat. Employing UBA with a tighter sensitivity around anomalies is a good start, but on its own it does not provide that system of record.

    UAM provides the organization with the ability to record, alert on, and review insider activity. To demonstrate how UAM provides value, let’s reuse the example of the Accounts Payable person in a construction company pulling a list of customers. With UAM, someone in IT or Security could be notified when an export of details is run within the AP application. A review could then be performed by playing back the activity in detail before, during, and after the export to see why the insider (now a person of interest, or POI) pulled the list of contractors and what they did with it.

    It’s this context that allows organizations to understand the intent of the employee. If the AP employee copied the exported data to a USB drive with no evidence of any request for it from a superior, you know you have an insider threat action. But if an email was received from the CFO prior to the export, asking for an analysis of the data, and the export itself was printed out, it becomes clear the action was taken as part of doing their job.

    Aligning Controls to Risk Levels

    We’ve provided just two types of controls. But, based on organizational need and the chosen solution(s), you may desire to take your assigned risk scores and group them into more than just two control levels. It’s important to consider the capabilities of your chosen UBA and UAM solution(s), with an eye towards making sure they deliver the ability to:

    1. Analyze – the activity and behaviors in your organization,
    2. Detect – meaningful events or shifts that suggest imminent risk,
    3. Prioritize – where your focus should be, by presenting only meaningful information without contributing to ‘over-alert syndrome’, and
    4. Respond – effectively and efficiently, without significant strain on the organization’s people and resources.

    For example, you may have three control levels, representing those the organization deems are a low threat, those of medium risk that have access to some valuable – but not critical – data, and those of high risk with access to sensitive, confidential data.

    Some work will be needed to align the specific features of UAM and UBA solutions with the risk-mitigating intent of each risk level. It’s this alignment that will help you both choose the correct solution(s) and establish the right number of control levels.

  • Step 3 of 5 to Quantifying Insider Risk

    by Mike Tierney | Feb 08, 2017

    Define Risk Levels

    In order to establish controls that allow the organization to properly detect insider risk, you must first know where to look. Each position within your company has a relative level of risk associated with it. For example, a position that has access to and works directly with intellectual property puts the organization at a much higher level of risk than one with limited access to customer contact data. A measured response is needed for each position, relative to its level of risk. Put too little emphasis on monitoring risky users and you may find your organization the victim of an insider attack. Put too much emphasis on ‘eyes on glass’ monitoring of users who pose no real risk, and you will have wasted time, budget, and energy.

    How Should You Assign Risk?

    So, you can see that it is important to first assign risk levels and then, based on the risk assessment, make decisions on the controls that should be in place. There are a few levels at which you can assess and assign risk:

    1. Based on Position – Risk can most easily and accurately be assigned by looking at a given role or position within the organization. While the person occupying a position may change over time, the position itself will have similar access, working locations, employee autonomy, etc.
    2. Based on Department – In some cases, an entire department – regardless of specific role – presents a similar risk to the organization based on their access to confidential information, an ability to transmit/export data, etc. A good example is the Sales department.
    3. Based on an Individual – In extreme cases, an individual may have extensive access to company data regardless of title, position, or functional role, such as the founder of a company.

    The goal is to quantify a degree of risk using some method of scoring (1-10, letter grades A-F, or even asking Y/N questions and adding up the Y answers). The calculation method isn’t as important as working through the risk-assignment process and doing it consistently. The scores should be determined using a mix of objective and subjective criteria (to properly inject the organization’s view of the risk a position, department, or individual poses), such as:

    • Access to confidential information
    • Ability to export data
    • Ability to freely transmit data over unsecured channels
    • Amount of supervision
    • Whether they work locally or remotely
    • How much damage a given employee (based on department, position, or the individual) could do if they decided to steal information

    The list above is by no means comprehensive, but does provide direction around the types of criteria you should use to start developing a scoring system. The focus should be on the ways any employee can pose a risk to your organization, and how detrimental the repercussions of malicious actions would be if they were to be taken by a given employee.

    Once you have decided upon and finalized the questions used on your risk scoring worksheet, along with the associated scoring method, work through each of the positions, departments, and individuals; you will end up with a set of scores.

    See Guide Essentials: Quantifying Positional Risk Worksheet – use this worksheet to see examples of how you might assign risk scores.

    It’s important that the criteria be applied consistently across every single position, department, and individual. Why? Because when you run your very first assessment of risk and, based on your model, come up with a risk score of, say, 7 – what does that even mean? Right. Initially, nothing.

    It’s not until you look at various positions, individuals, and departments, see the similarities and differences in how you scored each, and use those comparisons to group risk scores into simpler levels – such as Low, Medium, and High (shown below) – that the scores become meaningful. Those levels will correspond to everyday controls you will implement to detect and prevent risk (detailed in the next section).
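    The Y/N worksheet approach above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed tool: the criteria names and the Low/Medium/High thresholds below are hypothetical placeholders, and you would substitute the questions and cut-offs you settle on for your own organization.

    ```python
    # Hypothetical Y/N risk-scoring worksheet: one point per criterion that
    # applies, then raw scores grouped into Low/Medium/High levels.

    CRITERIA = [
        "Access to confidential information",
        "Ability to export data",
        "Ability to freely transmit data over unsecured channels",
        "Minimal supervision",
        "Works remotely",
        "High potential damage if data were stolen",
    ]

    def risk_score(answers):
        """Count the Yes answers for a position, department, or individual."""
        return sum(1 for criterion in CRITERIA if answers.get(criterion, False))

    def risk_level(score, low_max=2, medium_max=4):
        """Group a raw score into a simpler level.

        The thresholds here are illustrative; set them by comparing scores
        across your own positions, departments, and individuals.
        """
        if score <= low_max:
            return "Low"
        if score <= medium_max:
            return "Medium"
        return "High"

    # Example: a sales position with broad access and little supervision.
    sales = {
        "Access to confidential information": True,
        "Ability to export data": True,
        "Minimal supervision": True,
        "High potential damage if data were stolen": True,
    }
    print(risk_score(sales), risk_level(risk_score(sales)))  # 4 Medium
    ```

    The point of keeping the scoring this simple is consistency: because every position, department, and individual is run through the same criteria and thresholds, the resulting levels are comparable across the organization.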

    Lastly, because risk will shift over time as new technologies, security policies, and IT processes are put in place, it’s important to perform a periodic review process to ensure the correct risk levels are assigned (and, therefore, risk controls are in place). This can be quarterly, semi-annually, or annually. You’ll need to decide how often to review both the questions and scoring system used.

    Once a risk score has been defined for a given position, department, or individual, the score should be communicated: to HR, to empower them as a source of intel around personal and personnel issues that may signify a need for elevated scrutiny by your security team; and to the security team itself, so they can align proactive measures to risk.

  • Step 2 of 5 to Quantifying Insider Risk

    by Mike Tierney | Feb 05, 2017

    Adjust your Hiring Process to Address Insider Risk

    Insider risk begins the moment you grant access.

    On an employee’s first day, present them with a Confidentiality & Intellectual Property Agreement (CIPA). This agreement is designed to put a number of insider risk controls in place:

    1. Define Confidential Information – The new employee should understand what categories of information constitute confidential data and the organization’s intellectual property. By communicating what the organization considers confidential information, the new employee begins their employment aware of the organization’s desire to protect its confidential information. This should be very specific, relying on IT and Security staff to define what kinds of information are of a sensitive nature.
    2. Convey Confidentiality Requirements – The agreement needs to detail which actions are and are not appropriate when handling confidential information. Describing how the organization wants employees to conduct themselves when working with, or coming into contact with, confidential information further conveys the organization’s strong stance on protecting its confidential information. Legal and Security staff can provide guidance on what kinds of actions are inappropriate, ensuring employees understand their usage limitations.
    3. Communicate Expected Behavior – The agreement should emphasize that the employee is to err on the side of confidentiality. By establishing expected behavior, the organization makes absolutely certain the employee has a clear picture of how they are expected to treat any confidential data.
    4. Inform of Need to Return or Destroy – The employee should understand that any and all data that falls subject to the CIPA or is owned by the organization is expected to be returned or destroyed upon termination of their employment.

    This CIPA should be presented to every employee regardless of the employee’s position, title, level of perceived access to sensitive information, etc. The goal of the CIPA is to level-set every employee on how the organization seeks to safeguard its confidential information and the employee’s role in helping maintain that protection. This is something that is commonly, but not universally, done. If you were asked to sign one when you started, you can feel good that your organization has addressed one of the most basic building blocks of an effective insider threat program.

    Making the CIPA Understandable

    Because the CIPA is a legally-binding document given to people who normally have little more experience with contracts than perhaps a mortgage, rental, or car lease agreement, it is important to have the CIPA written in as close to “plain English” as possible. Using clear, everyday language helps establish the effectiveness of the document as a deterrent, spelling out exactly what the organization defines as confidential and what it expects of an employee.

    It’s equally important to spell out those expectations rather than defaulting to brevity. For example, if the CIPA states that “all company data and assets must be returned”, does that mean an employee simply needs to forward a copy of an email they have, but can keep the original? Of course not. So the CIPA, in this instance, would need to use language like “return and destroy”, spelling out that an employee (or contractor) is to have no physical or digital copies of any company data, emails, information, etc.