What is Cybersecurity?

This question stumps the average person. How does one have a secure cyber-environment? What is going on inside computers and IT systems that keeps hackers away?

Cybersecurity, according to Merriam-Webster, is “measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack.” These measures are administered by people, processes, and technology. The people part of cybersecurity is typically an organization’s Information Technology (IT) team, which creates the processes necessary to provide instruction for identifying and protecting against potential threats.

Ntirety Director of Cyber Security Operations Christopher Houseknecht considers himself a “computer geek.” He has been interested in the operation and evolution of the cyber world for the last 25 years, growing up with it and today working for our cybersecurity company, Ntirety.

“Everything from what kind of business I conduct on my phone, private, or business related, as well as the kind of things my children do, [cybersecurity] impacts me throughout every aspect of my life,” Houseknecht says.

Houseknecht and Chief Technology Officer (CTO) and SVP of Development and Engineering Joshua Henderson both described cybersecurity as existing in “layers.” Houseknecht says these layers are made up of components such as encryption, antivirus, endpoint detection and response capabilities, and separation from the network or internet. Cybersecurity is not one singular layer of protection; numerous layers are needed to fully protect precious data.

It is always important to have a backup plan. If the first line of defense falls through, a backup plan saves you from scrambling to figure out how to handle the situation before it is too late. Similarly, cybersecurity must exist in “layers” so that if the bad guys somehow find their way through the first layer, precious data is not lost or stolen.

Product Manager Dave Considine also emphasizes the importance of layered security. Considine describes this as giving someone access to a resource, but limiting what they can do within it. He explains that not everyone in a company should be able to access every resource.

Henderson describes cybersecurity as making sure data is safe and available, up and running for the people who need to and are meant to access it. It is the effort from the people, technology, and processes to keep the cybercriminals out. Houseknecht explains further that technology can only do so much; it is important to have a team of people and processes in place to guide the technology to do what it needs to do.

“[Hackers] don’t care whether you’re just an average Joe using computers to play video games or if you’re running a cybersecurity company.”

CEO Emil Sayegh emphasizes how important it is for businesses to have a comprehensive security plan and a partner operating 24/7 to protect themselves and their clients. He explains that one aspect of cyber protection will not defend against all possible cyber attacks. Phishing, malware, DDoS attacks and more require different solutions.

Handling cybersecurity internally may seem like the easier and cheaper option for a business, but it requires investing in a long list of products and keeping many people constantly monitoring and operating the technology. In the long run, off-the-shelf security products can cost more as they keep piling up while threats become more complicated and hackers become more sophisticated, not to mention the cost of hiring or training employees to tackle these evolving risks.

“That’s where someone like Ntirety has a really beneficial solution to most customers and companies out there,” Henderson says. “The average company is not going to really want to operate or find the staffing to do it the right way.”

While it is important to bring on a team of qualified individuals to help maintain the safety of normal IT-related business operations, it is also crucial to abide by cybersecurity best practices every day on your own. Henderson and Houseknecht both mentioned the importance of good cyber-hygiene. Cyber-hygiene is how someone conducts themselves in the cyber-world. It includes practices such as not sharing passwords, not clicking suspicious links, using two-factor authentication, and not plugging in a USB drive of unknown origin.

Houseknecht also expressed the importance of having resiliency in cyber-matters.

“Never assume it won’t happen to you,” Houseknecht warns. “[Hackers] don’t care whether you’re just an average Joe using computers to play video games or if you’re running a cybersecurity company.”

The recent cyberattack on IT software and management company SolarWinds is an unfortunate example of a technology business that was hacked and faced disastrous consequences. The company works with businesses and government agencies, but it’s not just larger companies that need to worry.

So much of our lives exist online now — medical records, academic information, financial details and more are stored online. In addition to this, social media has become a way of connecting with family, friends, and businesses all around the world. There will always be people who will misuse resources and seek to steal private information for personal gain. But that’s where cybersecurity comes in to provide peace of mind through proactively keeping the bad guys out and keeping important data in.

The cyber-world has moved from a “perimeter” to a “distributed mindset,” according to Considine.

The “perimeter” concept of cybersecurity is an outdated approach, sometimes referred to as the “castle mentality,” and is defined as the idea that securing the perimeter of an IT environment (i.e. building castle walls and digging a moat) is enough. It is outdated because it ignores activity within the environment that may be malicious, and it is becoming more and more difficult to secure the perimeter of more advanced cloud and hybrid environments.

“Trust your instincts.”

Cloud services, capabilities, and computing have eliminated the perimeter mindset. People distributed across the world can access services from anywhere thanks to cloud computing. With this greater access to resources comes an even greater need for cybersecurity.

In addition to the cyber-world’s shift to a distributed mindset, remote work became increasingly common as cloud computing resources grew, and especially after the start of the Covid-19 pandemic, which pushed a huge portion of the workforce to work from home and introduced a whole new slew of cyber-risks. More workplaces have adopted fully or partially remote work schedules, and your security posture needs to adapt as well.

The effects of data theft reach beyond personal data and the terrible personal consequences that follow; they also hit large businesses and critical infrastructure, a recent example being the Colonial Pipeline. The oil pipeline system that stretches from Texas to New York is responsible for carrying gasoline and jet fuel to the southeastern portion of the United States, and it uses computerized equipment to help manage operations. The ransomware attack hindered operations so severely that the President of the United States declared a state of emergency, and the company ended up paying millions in ransom.

With computers making up so much of our daily social and business functions, cybersecurity must be at the forefront of our minds. Cybersecurity starts with you.

Sayegh urges anyone utilizing a computer or IT environment to be alert and aware of potential threats. Cyber criminals often manufacture a sense of urgency to get personal details from you, but Sayegh stresses the importance of always double-checking sources and never being too quick to give out information.

“Trust your instincts,” Sayegh said. “Anything that smells fishy [or is] too good to be true, don’t do it.”

Calculating the Real Cost of Downtime for Your Business

Be prepared for the worst-case scenario 

From startling headlines that have highlighted recent data breaches to the impending doom a single storm can spell for data centers, it becomes clearer every day that business continuity and disaster recovery are critical components to every IT strategy. While getting familiar with today’s modern IT threats, risks, and possible vulnerabilities within current systems is important, understanding downtime resulting from a disaster and its long-lasting repercussions—numbers unique to each individual business—is even more vital when designing an effective business continuity plan.

In order to determine the cost of downtime and its consequences due to an unexpected disaster, IT professionals first need to break down the overall elements that can contribute to it.

Where do the costs add up?

Time is money, so the saying goes, and the monetary impact of downtime reaches far beyond the IT team, including:

  • Idle workers across departments who are still on the clock but cannot perform their job duties
  • Physical damage to infrastructure, equipment or the building itself
  • Lost revenue due to inoperable point-of-sale (POS) systems or the inability to deliver products to market
  • Hiring additional outside resources and specialists for data recovery
  • Repair or replacement of technology components
  • Reputation damage from vendors, clients and prospects

With the different elements to consider, it is little wonder that research by Gartner reports the average cost of IT downtime as $5,600 per minute. While that statistic may seem staggering—even unbelievable to some—finding the cost of downtime for an individual organization can be easily accomplished.

How to calculate the cost of downtime?

Calculating the unique cost of downtime can be done in terms of revenue loss and productivity cost. Both can be achieved (and reassessed over time) with clear formulas, using the information specific to the company.

To calculate revenue loss, gather the following information:

  • Gross yearly revenue (GR)
  • Total annual working hours (TH)
  • Percentage impact (I)
  • Number of downtime hours (H)

With the numbers identified, use this formula:

(GR/TH) x I x H = revenue loss

 

To calculate productivity cost, gather the following information:

  • Number of employees affected (E)
  • Average percentage of productivity affected (A)
  • Average cost of employees per hour (C)
  • Number of downtime hours (H)

With the numbers identified, use this formula:

E x A x C x H = productivity cost
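
As an illustration, here is a minimal Python sketch that plugs both formulas into a small calculator. The sample figures are placeholder assumptions for demonstration only, not benchmarks.

```python
# Hypothetical example: the two formulas above wired into a small script.
# All figures below are placeholder assumptions, not real company data.

def revenue_loss(gross_yearly_revenue, total_annual_hours, impact_pct, downtime_hours):
    """(GR / TH) x I x H: revenue lost while systems are down."""
    return (gross_yearly_revenue / total_annual_hours) * impact_pct * downtime_hours

def productivity_cost(employees_affected, productivity_impact_pct, avg_cost_per_hour, downtime_hours):
    """E x A x C x H: cost of paying staff who cannot work."""
    return employees_affected * productivity_impact_pct * avg_cost_per_hour * downtime_hours

# Assumed sample figures for a 4-hour outage:
loss = revenue_loss(gross_yearly_revenue=10_000_000,   # GR: $10M per year
                    total_annual_hours=2_080,          # TH: 52 weeks x 40 hours
                    impact_pct=0.75,                   # I: 75% of revenue depends on the system
                    downtime_hours=4)                  # H
cost = productivity_cost(employees_affected=120,       # E
                         productivity_impact_pct=0.60, # A
                         avg_cost_per_hour=45,         # C
                         downtime_hours=4)             # H
print(f"Estimated revenue loss: ${loss:,.2f}")       # ~ $14,423.08
print(f"Estimated productivity cost: ${cost:,.2f}")  # $12,960.00
```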

Armed with real numbers, crafting the disaster recovery and business continuity plan to adequately prepare and protect an organization can become a priority supported throughout operations.

Control costs and continuity with a trusted IT partner

While the cost of downtime can be calculated with simple formulas, constructing worst-case scenario plans to minimize the impact of such costs is anything but simple. Engaging with experts to design recovery and business continuity plans not only ensures that every detail of an organization’s IT systems has been accounted for, but also saves internal IT teams from being distracted by “what-ifs” instead of business goals. Ntirety Disaster Recovery (DR) Services help ensure mission-critical applications are safeguarded against malicious attacks, weather-related phenomena, and other triggers of unexpected downtime. From platform management to continuous data protection and architecture design, Ntirety DR empowers enterprise companies to provide continuous service to customers and stakeholders with confidence.

Assess Your Security Posture

Due to limited time, resources, and expertise, prepping for disasters, avoiding security threats, and meeting ever-changing compliance regulations can be a huge source of pain for enterprise organizations. Take this quick interactive questionnaire to help determine if your strategy is broken. 

Security Gap Gives Hacker Access to 100 Million Bank Customers’ Personal Information

Capital One is the Latest Enterprise to Hit the Headlines Over a Data Breach

On Monday, July 29, 2019, Capital One Financial Corp. announced that more than 100 million of its credit card customers and card applicants in the U.S. and Canada had their personal information hacked in one of the largest data breaches ever.

Paige Thompson, a software engineer in Seattle, is accused of breaking into a Capital One server and gaining access to 140,000 Social Security numbers, 1 million Canadian Social Insurance numbers and 80,000 bank account numbers in addition to an undisclosed number of people’s names, addresses, credit scores, credit limits, balances, and other information. The Justice Department released a statement Monday confirming that Thompson has been arrested and charged with computer fraud and abuse.

As the CISO of a global IT solutions provider, I am always hesitant to comment on these situations because if it can happen to one of the biggest players in the industry, then everyone is at risk. Bad actors have unlimited time, resources and motivations—that’s why advancing a cybersecurity program is critical to every organization’s maturity process. We, the cybersecurity community, must do better collectively.

While the Capital One data breach is staggering with more than 100 million affected, this is just another event in a long list of massive data incidents during recent years, including Equifax, Marriott, Home Depot, Uber, and Target. Adding to the list of compromised information, “improper access or collection of user’s data” like Cambridge Analytica or WhatsApp have also made recent unsettling headlines.

Don’t Wait for Hackers to Find the Vulnerabilities from Within

Court filings in the Capital One case report that a “misconfigured web application firewall” enabled the hacker to gain access to the data. As infrastructures, support structures, and data flows become more complex, the need for security and visibility increases exponentially. Fundamentals like asset management, patching, and role-based user access are critical and cannot be overlooked.

These pillars of protection are achievable with the help of experienced partners, like the managed security experts at Ntirety, focused on finding and filling any gap in existing infrastructure and applications.

Learn more about how Ntirety’s Managed Security services can be the better shield for your data against hackers. >>

Take Charge on a Personal Level by Using a Passphrase

Even with all the internal work and effort businesses put towards protecting data, consumers should still take precautions and be proactive in protecting their identity. Never give personal information out over the phone—even if the caller appears to be from a reputable organization like Capital One. Phishing scams through calls, emails, and text messages are only increasing. Even offers for IT protection from unvetted parties can be attempts to gather or “fill in” additional information for malicious purposes.

One of the quickest ways to boost protection of your personal information is to change your password to a passphrase. Create a great passphrase in three easy steps:

  • Use personally *meaningless* passphrases
  • Aim for a pseudo-random mix of at least 15 characters
  • Pick a minimum of 4 words—RANDOMLY

Simply combining random words (like DECIDE OVAL AND MERRY = Decide0val&andmerry) can build a new passphrase far more secure than “12345” or “password1”.
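
For those who prefer to automate the advice above, here is a minimal sketch using Python's standard secrets module. The short word list is a placeholder; a real generator should draw from a large list such as the EFF diceware list.

```python
# A minimal sketch of the "pick 4+ random words" advice, using Python's
# standard secrets module for cryptographically strong randomness.
# The word list here is a tiny placeholder; a real generator should draw
# from a large list such as the EFF diceware list.
import secrets

WORDS = ["decide", "oval", "merry", "anchor", "sunset", "granite",
         "velvet", "copper", "lantern", "orbit", "meadow", "quartz"]

def make_passphrase(num_words: int = 4) -> str:
    """Pick words at random and join them into a single passphrase."""
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

print(make_passphrase())   # e.g. "granite-orbit-merry-lantern"
```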

Let Partners Provide You Peace of Mind Against Security Threats

While every individual should be an active participant in protecting their identity and personal data, enterprise companies can’t ignore the devastating regularity of these hacks and breaches. IT security is a crucial component for any modern business, and equally important is the constant vigilance to keep those security measures validated and updated. Vulnerabilities emerge with every new technological advance, making an experienced partner to keep a steadfast watch necessary to allow organizations’ own IT teams to focus on innovation and business goals.

Ntirety’s Managed Security services bridge the gaps every company faces as systems, tools, and data grow rapidly. Expert monitoring, risk reduction, and mitigation from trusted IT partners empower internal teams to focus on pushing business forward. Don’t trust that your basic security is enough to keep your company out of the hacker headlines—get real peace of mind with cybersecurity experts like Ntirety watching your backend systems, infrastructure, and applications.

Schedule a consultation with Ntirety today to proactively protect your data from hacker threats and data breaches.

CISO Chris Riley’s CloudEXPO Presentation: The Great Migration: Retreat from the Cloud Sacrificing Security?

On June 24th, Ntirety CISO Chris Riley was proud to present The Great Migration: Retreat from the Cloud Sacrificing Security? at the 23rd International CloudEXPO conference in Silicon Valley. With over 20 years of enterprise IT experience, Riley brought unparalleled perspectives to the CloudEXPO stage on the current state of IT security, including shared concerns, hidden risks, and the tested tactics to protect data.

Security Threats Remain Even in New Cloud Solutions

Migrating to the cloud provides numerous benefits to enterprise organizations, but do-it-yourself or one-size-fits-all approaches to cloud selection and management have created a number of concerns for internal IT teams across industries. This phenomenon has led to a shift away from the one-size-fits-all approach toward more hybrid cloud options, as noted in Ntirety CEO Emil Sayegh’s keynote presentation. However, while hybrid solutions do eliminate issues relating to cost and performance, they can still leave gaps in security and compliance.

Despite the advances hybrid and multi-cloud options bring, threats can spring from a variety of both external and internal sources. Calling these threats the “Treacherous 12,” Riley shared the most critical issues plaguing cloud security, drawn from a survey by the Cloud Security Alliance:

  1. Data Breaches
  2. Weak Identity, Credential and Access Management
  3. Insecure Application Programming Interfaces (APIs)
  4. System and Application Vulnerabilities
  5. Account Hijacking
  6. Malicious Insiders
  7. Advanced Persistent Threats (APTs)
  8. Data Loss
  9. Insufficient Due Diligence
  10. Abuse and Nefarious Use of Cloud Services
  11. Denial of Service
  12. Shared Technology Issues

From massive data breaches to the headaches of employees sharing passwords, these challenges exist—knowingly or unknowingly—for all organizations in the cloud.

Combatting Risks with Better Internal Tactics

Although the above list may seem daunting, Riley illustrated to CloudEXPO attendees that there is hope. Visibility, segmentation, automation—all these modern cloud security pillars are achievable through more detailed and dedicated processes, like enforcing access control, re-architecting systems, and monitoring behavioral activity.

All the elements for better security and data protection are obtainable, Riley explained, if cross-functional internal teams can work together and prove that investing in greater measures is not only worthwhile but vital for every cloud solution.

“The fact of the matter is we have to demonstrate the value, we have to enable the business, and we have to do it in near real-time fashion,” said the Ntirety CISO to his audience. “Because the business isn’t going to wait for us.”

Bringing diverse members of a company’s team together for increased communication is a key component of implementing any new security strategy or process, especially the imperative collaboration between the departments of Development, Security, and Operations. Coining this the “trifecta of success”, Riley emphasized that encouraging frequent and in-depth conversations across DevSecOps teams will “inherently have a strong mentality to code things right, to secure things appropriately, and to allow the business to be successful.”

Better Security from the Inside Out

The concerns are real and more relevant than ever, but so are the tactics to tackle them, Riley assured his audience in Silicon Valley. He elucidated the current state of IT security—the good, the bad, the ugly—and the ways enterprise companies can stay ahead of the threats. For CloudEXPO attendees, the practical ways Riley outlined to protect systems and data in today’s increasingly insecure world were just the kind of insights enterprise IT professionals look for: identifiable risks, actionable plans, and sustainable methods.

Ready to get your own IT security insights from trusted cloud experts? Schedule your consultation for better data protection today!

Keep Your Company Out of the Shocking Data Breach Headlines

Rising Statistics Show Internal Security is Not Enough to Protect Data

On Monday June 3, Quest Diagnostics, the largest blood-testing company in the world, reported that nearly 12 million patients’ personal information, including financial data, social security numbers, and medical records, was exposed through a data breach at a third-party billing collection agency. While lab results were not affected, the sheer number of patients affected makes this event the second largest healthcare data breach ever reported, following only health insurer Anthem’s 78.8 million record data breach in 2015.

The Overlooked Third-Party Risk

How could patient data at a global company like Quest be so vulnerable? The risk did not come from within the enterprise healthcare company, but through a data breach at American Medical Collection Agency (AMCA), a third-party billing collection service vendor providing services to Quest’s healthcare revenue manager, Optum360 LLC.

External entities like AMCA are widely used across industries. A recent Deloitte poll found 70% of enterprise businesses report a moderate to high reliance on third-party services, but all the rewards come with equal risks. The same poll found that 47% of the organizations surveyed had experienced a risk incident involving the use of third-party services in the last three years.

Quest is Not Alone and That’s Not a Good Thing

Healthcare is an appealing target for hackers, and third-party services have provided the perfect backdoor access to data for several major breaches in 2019.

Just one day after Quest made their announcement, diagnostics company LabCorp reported nearly 7.7 million patients’ personal data was exposed as a result of a massive breach at the same third-party billing collection agency as Quest: AMCA. Additionally, Rush System for Health reported in March 2019 that the personal information for approximately 45,000 patients was compromised due to their third-party claims processing services vendor, and Emerson Hospital reported around the same time that 6,314 patients had portions of their protected health information exposed due to a security breach at a third-party services vendor.

Beyond healthcare, big-name companies across industries have made headlines due to compromised data, including Target, Home Depot, Applebee’s, and Saks Fifth Avenue. A 2018 study by Opus & Ponemon Institute found that 59% of companies experienced a third-party data breach that year, but a mere 16% claimed they effectively mitigated third-party risks. While it may seem obvious that outside entities can create security gaps, dedicated evaluation and management of these additions is often substandard, with only 37% of the study’s respondents indicating they had enough resources to manage third-party relationships.

Cautionary tales featuring global healthcare companies, retail giants, and national restaurant chains might be enough to change those eye-opening statistics, but lawmakers are now asking impacted companies about their “vendor selection and due diligence process, sub-supplier monitoring, [and] continuous vendor evaluation policies,” and, in light of the recent breach headlines, pointedly asking: “how many times has Quest Diagnostics conducted a security test which evaluates both Quest Diagnostics’ systems as well as the systems of any companies it outsourced to?”

Don’t be in the News for a Breach and Don’t be a Statistic – Here’s How

First, following best practices and compliance mandates can set enterprise organizations up to better protect their data from any vulnerabilities third-party entities present, including:

  1. Regularly scheduled vulnerability assessments
  2. HIPAA-required risk assessments for healthcare organizations
  3. Dedicated security management and monitoring
  4. Disaster Recovery planning

BAAs are Necessary but Not Sufficient

Enterprise companies must always ensure that they have a solid and trustworthy partner that can deliver secure infrastructure with a comprehensive Business Associate Agreement (BAA). A BAA acts as a binding contract that creates liability between the company and vendor and holds both parties to stringent HIPAA regulations, but more can be done to truly ensure security for critical data. Ntirety provides peace of mind with industry-leading BAAs and, moreover, with our HITRUST CSF Certified status, demonstrating that all certified applications appropriately manage risk by meeting key regulations and industry-defined requirements. “HITRUST CSF is the gold standard,” says CEO Emil Sayegh. “In the face of mounting data breaches, companies handling sensitive data must remove all doubt by working with trusted cloud providers with deep experience in security protocols and regulatory compliance.”

Trust is Possible with the Right Third-Party Vendors

Whether starting from square one or proactively planning for a worst-case scenario, organizations can avoid a data breach disaster at the hands of a third-party vendor with diligent vetting, managing, and planning – all of which can be time-consuming and drain resources, as the 37% statistic above suggests.

Meeting HIPAA compliance and setting strong BAAs are a good start, but with the help of experienced HITRUST-certified experts, businesses can better trust their third-party associates. Like an extension of their own teams, Ntirety guides and supports with our detailed and compliance-focused assessments, steadfast monitoring, and rigorously tested recovery plans. Ntirety is ready to meet any organization’s needs, such as our client BlueSky Creative, Inc. who had “a lot of questions and need[ed] to be 100% confident in the provider”, but Vice President Stephanie Butler explains that with Ntirety “from day one, all my questions were answered, and I was given all the guidance I needed and more.”

As a tenured IT services company with over 20 years of experience, Ntirety solutions meet compliance for PCI, HITRUST, HIPAA, FERPA, and GDPR guidelines, and our BAAs strengthen the mutual commitment to safeguard customer data. Our design for data security thoroughly evaluates all third-party vendors and how they interact with all systems and platforms, and continues with ongoing safeguard evaluations, so no customer ever has to worry about becoming a statistic.

Schedule a consultation with Ntirety to protect your data and keep your third parties secure.

The Many Names and Faces of Disaster Recovery

When discussing disaster recovery, people often throw out a variety of words and terms to describe their strategy. Sometimes, these terms are used interchangeably, even when they mean very different things. In this post, we’ll explore these terms and their usage so you can go into the planning process well-informed.

Disaster Recovery:

This is a term that has been making the rounds since the mid- to late seventies. Although the meaning has evolved slightly over time, the disaster recovery process generally focuses on preventing loss from natural and man-made disasters, such as floods, tornadoes, hazardous material spills, IT bugs, or bio-terrorism. Many times, a company’s disaster recovery plan is to duplicate its bare metal infrastructure to create geographic redundancy.


Recovery Time Objective (RTO): 

As you build your disaster recovery strategy, you must make two crucial determinations. First, figure out how much time you can afford to wait while your infrastructure works to get back up and running after a disaster. This number will be your RTO. Some businesses can only survive without a specific IT system for a few minutes. Others can tolerate a wait of an hour, a day, or a week. It all depends on the objectives of your business.


Recovery Point Objective (RPO):

The second determination an organization must make as they discuss disaster recovery is how much tolerance they have for losing data. For example, if your system goes down, can your business still operate if the data you recover is a week old? Perhaps you can only tolerate a data loss of a few days or hours. This figure will be your RPO.
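
To make RTO and RPO concrete, here is a small hypothetical Python check that compares a backup schedule and an estimated recovery time against those two targets; the intervals used are illustrative only.

```python
# Hypothetical helper: compare a backup schedule and a recovery estimate
# against RPO/RTO targets. The intervals below are illustrative only.
from datetime import timedelta

def meets_objectives(backup_interval, estimated_recovery, rpo, rto):
    """Worst-case data loss equals the gap between backups (RPO);
    the time to restore service must fit inside the RTO window."""
    return {"rpo_met": backup_interval <= rpo,
            "rto_met": estimated_recovery <= rto}

# Hourly backups and a 30-minute restore, against a 4-hour RPO and 1-hour RTO:
print(meets_objectives(backup_interval=timedelta(hours=1),
                       estimated_recovery=timedelta(minutes=30),
                       rpo=timedelta(hours=4),
                       rto=timedelta(hours=1)))
# {'rpo_met': True, 'rto_met': True}
```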


IT Resilience:

This term refers to an organization’s ability to adapt to both planned and unplanned failures, along with its capacity to maintain high availability. IT resilience differs from traditional disaster recovery in that it also encompasses planned events, such as cloud migrations, datacenter consolidations, and maintenance.


Load Balancing:

To gain IT resilience and keep applications highly available, companies must engage in load balancing, which is the practice of building an infrastructure that can distribute, manage, and shift workload traffic evenly across servers and data centers. With load balancing, a downed server is no concern because there are several other servers ready to pick up the slack.

Streaming giant Netflix often tests the load balancing ability of its network with a proprietary program called Chaos Monkey. Using this tool, Netflix ensures that its infrastructure can sustain random failures by purposefully creating breakdowns throughout the environment. This is a great example for companies to follow. Ask yourself: What would happen if someone turned off my server or DDoSed my website? Would everything come crashing to a halt if an employee accidentally deleted a crucial file?
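
As a toy illustration of the round-robin idea, here is a short Python sketch that spreads requests across a pool of servers and skips any node marked unhealthy; the server names and health set are placeholders, not a production design.

```python
# A toy round-robin balancer: spread requests across a pool of servers and
# skip any node marked unhealthy. Server names are illustrative placeholders.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._pool = cycle(self.servers)

    def next_server(self, healthy):
        """Return the next healthy server, skipping downed nodes."""
        for _ in range(len(self.servers)):
            server = next(self._pool)
            if server in healthy:
                return server
        raise RuntimeError("No healthy servers available")

balancer = RoundRobinBalancer(["app-01", "app-02", "app-03"])
# Simulate a Chaos Monkey-style failure of app-02: traffic keeps flowing.
for _ in range(4):
    print(balancer.next_server(healthy={"app-01", "app-03"}))
# app-01, app-03, app-01, app-03
```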


Backup:

Backups are just one piece of the disaster recovery puzzle. Imagine if you took a snapshot of your entire workload and replicated it on a separate server or disc—that is a backup. With backups, you always have a point-in-time copy of your workload to revert to if something happens to your environment; however, anytime you must revert to a backup, anything created or changed between the time the last snapshot was taken and the time the disaster occurred will be lost.


Failover Cluster:

Another piece of the disaster recovery puzzle, failover clusters are groups of independent servers (often called nodes) that work together to increase the availability and scalability of clustered applications. Connected through networking and software, these servers “failover,” or begin working, when one or more nodes fail.

Which type of failover server you choose depends on how crucial the system is, along with the RPO and RTO targets of the disaster recovery plan. Failover servers are classified as follows:

  • Cold Standby: Receives data backups from the production system; is installed and configured only if production fails.
  • Warm Standby: Receives backups from production and is up and running at all times; in the case of a failure, the processes and subsystems are started on the warm standby to take over the production role.
  • Hot Standby: This configuration is up and running with up-to-date data and processes that are always ready; however, a hot standby will not process requests unless the production server fails.

Replication:

This term represents the process of copying one server’s application and database systems to another server as part of a disaster recovery plan. Sometimes, this means replacing scheduled backups. Replication happens closer to real time than traditional backups and can therefore typically support shorter RPOs and RTOs.

Replication can happen three different ways:

  • Physical server to physical server
  • Physical server to virtual server
  • Virtual server to virtual server

Database Mirroring:

As with backups and replication, database mirroring involves copying a set of data on two different pieces of hardware; however, with database mirroring, both copies run simultaneously. Anytime an update, insertion, or deletion is made on the principal database, it is also made on the mirror database so that your backup is always current.


Journaling:

In the process of journaling, you create a log of every transaction that occurs within a backup or mirrored database. These logs are sometimes moved to another database for processing so that there is a warm standby failover configuration of the database.
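
A minimal sketch of the journaling idea follows, assuming a simple append-only file and JSON records; both are illustrative choices, not a prescribed format.

```python
# A minimal journaling sketch: every change is appended to a transaction log
# before it is applied, so a standby copy can replay the journal to catch up.
# The file name and record format are illustrative assumptions.
import json
import time

JOURNAL_PATH = "transactions.journal"

def journal(operation, table, payload):
    """Append one transaction record to the journal."""
    entry = {"ts": time.time(), "op": operation, "table": table, "data": payload}
    with open(JOURNAL_PATH, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

def replay(apply_fn):
    """Re-apply every journaled transaction, e.g. on a warm standby copy."""
    with open(JOURNAL_PATH) as fh:
        for line in fh:
            apply_fn(json.loads(line))

journal("insert", "orders", {"id": 42, "total": 99.50})
```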


At the end of the day, what you really need is business continuity.  

A well-formed business continuity plan will use all of these methods to ensure your organization can overcome serious incidents or disasters. Going beyond availability, business continuity plans determine how your business will continue to run at times of trouble. Can your business survive a systems failure? Can it survive a situation where your offices burn down? How quickly can you access your mission-critical data and mission-critical applications? How will people access your mission-critical applications while your primary servers are down? Do you need VPNs so employees can work from home or from a temporary space? Have you tested and retested your business continuity plan to ensure you can actually recover? Does your plan follow all relevant guidelines and regulations?

The right mix of solutions will depend on the way your business operates, the goals you’re trying to achieve, and your RPO and RTO targets. In the end, the resilience of any IT infrastructure or business comes down to planning, design, and budget. With the right partner to provide disaster recovery and business continuity management services, you can come up with a smart plan that proactively factors in all risk, TCO goals, and availability objectives.

To start planning your own battle-tested IT disaster recovery plan and business continuity strategy—and ensure that your business is ready for anything—contact one of our experts for a free risk assessment today.

IoT Privacy Threats and the 7 Best Ways to Avoid Them   

Things are getting smarter. From manufacturing to healthcare to the everyday devices in houses and cars, nearly every industry is looking for more ways to integrate the IoT’s remote monitoring and tracking capabilities into their everyday operations. For organizations that haven’t adopted IoT protocols yet, it’s only a matter of time until they do. A recent study projected that more than 24 billion internet-connected devices will be installed worldwide in the next two years. That equates to more than four IoT devices for every human on the planet, prompting new concerns about security and privacy—and rightfully so, because with more connectivity and an increasing amount of data being transferred comes more vulnerability.

What does this mean for end-users and organizations?

Without the right protections in place, a hacker could easily gain access to the network-connected devices that surround you every day, changing the temperature in your house or controlling your car stereo. There’s even the potential for these privacy and safety breaches to go beyond mere annoyances, turning the issues into one of life or death. Imagine, for instance, if criminals could use IoT-enabled home devices to track a family’s comings and goings, or if they found a way to hack into an IoT-enabled insulin pump or pacemaker, taking their victim’s health hostage in the process.

Developers must take these risks into consideration as they build products and software that are IoT enabled. Further, CIOs and CTOs should take note—your risk profile has changed. Any deception—whether executed deliberately or by mistake—will likely be perceived as your fault. All of this means that for society to accept your IoT-enabled devices and software, or for companies to accept IoT-enabled devices into their organizations, you must make privacy and safety your first priority—no exceptions.

What are the most common types of privacy concerns?

This is an experiment, and end-users are part of it. 

Much to the delight of those who want to mine data from consumers for advertising or other more nefarious purposes, the IoT is a jackpot of personal data. Every day, consumers are becoming the subjects of behavioral experiments that they didn’t sign up for. Recently, for example, it was discovered that Roomba was sharing information on its customers’ home dimensions with advertisers without asking permission to do so. And much of the United States was infuriated when it was discovered that Facebook data once thought to be private was sold to a political firm in an effort to influence users’ behavior.

All of this raises the question—if we can’t socialize with our friends online or vacuum the floor without being tracked, what does this mean for the devices in our lives that keep us healthy or safe? Could the biometrics pulled from your fitness tracker be used to determine your fear level, propensity to be intimidated, anxieties related to finances, or more? End-users want to know that when they interact with IoT devices, they won’t become guinea pigs.

More endpoints, more problems. 

To the delight of clever cybercriminals, the IoT also offers more endpoints to attack. If a person hacked into a computer or smartphone that controlled other devices, they may also be able to gain control of those secondary devices. In other words, they can attack an entire network of devices by gaining access to just one.

IoT vulnerabilities are already showing. 

Just after its release, a Google Home Mini was found to be recording everything it heard, and an Amazon Alexa recently recorded and sent a family’s private conversation to a random contact without permission. Not long ago, it was found that some Google Home and Chromecast users’ locations could be tracked within minutes of their clicking on an innocent-looking link received from phishing scammers. Although all of these issues have since been fixed, we know that they are only a few of the many issues out there—what will be next?

Devices are building more public profiles for their users—and planting the seeds for discrimination.

As users interact with IoT devices, the data gathered from each one is compiled into a profile. If that profile goes public for any reason, the user is then at risk of facing discrimination from employers, insurance companies, and other agencies. While there are laws in place to prevent discrimination against protected classes of people, some experts believe that government agencies aren’t prepared to handle IoT-based discrimination.

This is especially true when you consider how hard it can be to detect and prosecute the most traditional forms of discrimination. It was recently revealed, for example, that roughly 60% of jobs eliminated by IBM in the 1980s were those held by employees ages 40 and over. As IoT-enabled devices become more prominent, there is a fear that this sort of discrimination will go beyond age, race, and sexual orientation to include buying habits, physical activity, and much more. The possibilities for abuse are limitless.

What about compliance?  

Although government agencies are keeping a close watch on IoT technologies and the potential security concerns they pose, most compliance standards for IoT data security and privacy haven’t entirely caught up yet. The European Union now has GDPR, a set of legal regulations that includes guidelines for data collection and personal information processing. Developers should be aware of the ways these new regulations could extend to the IoT, and additional regulatory rollouts are expected to come.

Medical devices could be one of the more frightening prospects to consider when it comes to data privacy compliance. Just one IoT breach could involve multiple HIPAA violations. A recent report on penetration and security risks classified the healthcare sector as one of the worst performing sectors when it comes to system security. The FDA has already issued recommendations on how healthcare facilities can ensure their devices stay protected, yet the complexity of monitoring all that data on so many devices is beyond the reach of many individuals and organizations. That’s why protecting user and data privacy is a must for any IoT system to be truly secure and fully accepted.

So, how can developers help protect the privacy of end-users?

1. Build your apps with security in mind.

CTOs and Application Developers must take a deliberate approach to building data privacy and security into every layer of their app. To best protect data across the board, IoT applications should align with the following principles (a brief pseudonymization sketch follows the list):

•  Data privacy: A stored data record must not expose undesired properties, such as the identity of a person. This one area is a huge challenge for IoT—and IT in general. It was hard before, and now it’s harder.

•  Anonymity: The property of a single person should not be identifiable as the source of data or an action.

•  Pseudonymity: Link the actions of each person with a pseudonym, or random identifier, rather than an identity. This trades off anonymity with accountability.

•  Unlinkability: This qualifies pseudonymity in the sense that specific actions of the same person must not be linked together, effectively protecting against profiling.
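
As promised above, here is a brief pseudonymization sketch in Python; the secret key, field names, and the HMAC-SHA256 choice are assumptions for illustration, not a mandated design.

```python
# Pseudonymization sketch: replace a raw identity with a keyed, non-reversible
# identifier so records can be correlated without exposing who the person is.
# The secret key, field names, and HMAC-SHA256 choice are assumptions.
import hashlib
import hmac

SECRET_KEY = b"store-and-rotate-this-in-a-vault"   # placeholder secret

def pseudonymize(user_id: str) -> str:
    """Derive a stable pseudonym instead of storing the raw identity."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

event = {"device": "thermostat-7", "action": "set_temp", "value": 21}
event["subject"] = pseudonymize("alice@example.com")   # no raw identity stored
print(event)
```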


2. Encrypt everything.

Use strong encryption across all of your devices and networks, and never allow users to export data beyond its native application unless they are entitled to do so. Your encryption should include the following (a short sketch follows the list):

•  AES-256 symmetric encryption for data stored to disks or archived

•  Bcrypt for one-way encryption of passphrases as needed

•  Mediated access to data classes via capability grants at the user/role level
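
Here is the short sketch referenced above, showing AES-256-GCM encryption and bcrypt passphrase hashing with the widely used third-party cryptography and bcrypt packages; key storage and nonce handling are deliberately simplified.

```python
# Illustrative only: AES-256-GCM for stored data and bcrypt for passphrases.
# Requires the third-party "cryptography" and "bcrypt" packages; key storage
# and nonce management are deliberately simplified here.
import os

import bcrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256 symmetric encryption for data stored to disk or archived
key = AESGCM.generate_key(bit_length=256)   # keep this in a key-management system
nonce = os.urandom(12)                      # must be unique per encryption
aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"patient record #1234", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"patient record #1234"

# bcrypt one-way hashing for passphrases
hashed = bcrypt.hashpw(b"granite-orbit-merry-lantern", bcrypt.gensalt())
assert bcrypt.checkpw(b"granite-orbit-merry-lantern", hashed)
```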


3. Use role-based authentication and authorization.

User roles should always be defined by capabilities rather than via a structure built on Super Users. Each user in the structure should be anonymized so they are only traceable through event streams with privileged knowledge.
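
A minimal capability-based role check might look like the following sketch; the role names and capabilities are hypothetical placeholders.

```python
# A hypothetical capability-based role check: permissions attach to roles,
# not to individual "super users". Role names and capabilities are placeholders.
ROLE_CAPABILITIES = {
    "device_operator": {"read_telemetry", "acknowledge_alert"},
    "firmware_admin":  {"read_telemetry", "push_firmware"},
    "auditor":         {"read_telemetry", "read_audit_log"},
}

def is_authorized(role: str, capability: str) -> bool:
    """Authorize by capability rather than by user identity."""
    return capability in ROLE_CAPABILITIES.get(role, set())

print(is_authorized("device_operator", "push_firmware"))  # False
print(is_authorized("firmware_admin", "push_firmware"))   # True
```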


4. Set up multiple access layers, then carefully secure and monitor your data.

Every data store should have mandatory access controls, and those for interfaces and web application services should have discretionary access controls. You should also set up:

•  Firewalls and web application firewalls to protect your environment from known threats

•  Logins and permissions for presentation level code

•  SSL and SSH to protect your network

•  Multifaceted passwords or dual authentication where applicable


5. Log everything.

The security applications you use should log all security events within the platform in one centralized place, and you should always have access to an audit trail that provides a reconstruction of events. Audit trails should include the following (a minimal sketch follows the list):

•  Timestamps

•  Processes and process interactions

•  Operations attempted and executed

•  Success and failure elements in events
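
Here is the minimal sketch referenced above: structured audit records with the listed fields, emitted through Python's standard logging module. In practice the handler would ship records to a centralized, append-only store, and the event names here are placeholders.

```python
# A minimal structured audit log with the fields listed above, emitted through
# Python's standard logging module. In practice the handler would ship records
# to a centralized, append-only store; the event names here are placeholders.
import json
import logging
import time

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())   # swap for a centralized handler

def audit(process, operation, succeeded, **context):
    record = {
        "timestamp": time.time(),   # timestamps
        "process": process,         # process and process interactions
        "operation": operation,     # operation attempted or executed
        "success": succeeded,       # success or failure element
        **context,
    }
    audit_log.info(json.dumps(record))

audit("billing-api", "export_patient_records", succeeded=False, subject="pseudonym:4f2a")
```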


6. Be mindful of dependencies.

This is especially important when you are leveraging open source. Don’t rely on other people’s code to be secure unless you are absolutely sure those individuals can be trusted and you have layered security to protect your organization—and your reputation.


7. Use trusted vendors

From the collaboration tools your organization uses to the infrastructure your products and services are built on, it’s important to partner with vendors that are compliant and put safety first. Healthcare organizations that work with HIPAA-compliant and HITRUST-certified vendors, for instance, can expect to be exposed to fewer risks thanks to the rigorous and standardized methods of securing protected health information that these vendors must follow.

Ensuring IoT privacy and security can be a big undertaking, especially for smaller companies with limited IT resources that are already spread thin. That’s why many successful organizations are partnering with managed hosting providers. These expert teams can help keep business data private and secure without having to add costly resources, and allow organizations to transfer some of that risk to a trusted partner and expert.


To learn more about how we can help you protect your customer or patient data with simple, streamlined, and fully-managed hosted solutions, schedule a consultation today.