More Info, More Problems: Privacy and Security Issues in the Age of Big Data

Jason Parms
Updated Jun 15, 2022

You need to take certain steps to protect your business data and your customers' privacy.

Today’s organizations can easily collect, analyze and store enormous amounts of sensitive data. Customers’ personal, payment and health information – along with a company’s intellectual property and pricing, cost, and employee data – are potentially vulnerable to would-be thieves, hackers, and even disgruntled or careless employees.

As data capacity grows exponentially, valuable data sets can make your organization a target. We’ll explore data collection’s privacy and security challenges and share data loss prevention measures to protect your data wherever it resides.

What is big data?

Big data operations gather data from multiple sources, combining in-house stores with data harvested from public sources like blogs, social media and clickstream data. They then store and analyze this data. 

To be considered big data, the information must have the three V’s: variety, volume and velocity.

  • Variety: Traditional data is usually one data type, such as text an organization can store in a database. In contrast, big data often includes various input types that can be difficult to categorize. For example, big data may comprise a combination of text, audio, videos and photos. To derive meaning from these varied data sources, additional processing is required.
  • Volume: Volume is the “big” in big data. With big data, there’s an enormous amount of data coming in and growing all the time. The amount can climb into the petabytes (1 PB equals 1,024 TB or 1 million GB) – much more than a regular database or other desktop software solution can handle.
  • Velocity: Velocity refers to how fast data arrives. Big data is not a fixed data set; it grows constantly and rapidly, so there’s no time to go back and analyze it retrospectively without sophisticated artificial intelligence or machine learning tools.

Did you know? In a survey by MicroStrategy, 94% of companies said data and analytics are key to their business growth and digital success.

What are the privacy challenges with big data?

The more information you gather, the greater the likelihood of housing sensitive data valuable to cybercriminals. We’ll explore five significant big data privacy challenges and how you can mitigate them. 

Maintaining the security of infrastructure

Building big data infrastructure in-house is a major investment of time and money for research, hardware, software and countless other details, so most organizations turn to distributed computing, otherwise known as “the cloud.” 

It’s challenging to maintain security amid distributed computing’s components and framework.

The distributed computing framework, utilizing parallel computation across multiple worker computers, creates opportunities for security breaches. Identifying a malicious or unreliable worker computer and protecting the data from these unreliable processors is crucial for security solutions involving big data.

SOLUTION: Two techniques can ensure the trustworthiness of worker computers: trust establishment and mandatory access control (MAC).

  • Trust establishment: With trust establishment, workers are stringently authenticated and given access properties only by designated authorized masters. After the initial qualification, worker properties must be checked periodically to ensure they continue to conform to predefined standards.
  • MAC: With MAC, each worker’s access is constrained to a very limited set of tasks, and a user’s ability to control the objects it creates is highly restricted. MAC attaches labels to all file system objects that define the access permitted for each object, and each user is granted a correspondingly defined level of access (a minimal sketch follows this list).
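
Here is a minimal sketch of the MAC idea in Python: objects carry labels, workers carry clearances, and a read is allowed only when the clearance dominates the label. The levels and file names are illustrative assumptions, not any particular product’s scheme.

```python
# Minimal sketch of label-based mandatory access control (MAC).
# The levels, paths and dominance rule are illustrative, not a real product's API.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_read(worker_clearance: str, object_label: str) -> bool:
    """A worker may read an object only if its clearance dominates the label."""
    return LEVELS[worker_clearance] >= LEVELS[object_label]

objects = {"/data/prices.csv": "confidential", "/data/blog_posts.json": "public"}
worker_clearance = "internal"

for path, label in objects.items():
    print(path, "->", "ALLOW" if can_read(worker_clearance, label) else "DENY")
```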

FYI: Because building big data infrastructure in-house is such a significant investment of time and money, most organizations turn to the best cloud storage and online backup services. Since data must travel to the cloud solution, it’s vulnerable to interception in transit.

Safeguarding non-relational data stores

In finding solutions to big data management, many organizations migrate from a traditional relational database to a NoSQL (Not Only SQL) database to deal with unstructured data.

A significant gap in NoSQL database architecture is security. NoSQL solutions were initially built as solution-specific tools to operate within a larger framework, leaving security to the parent system. Additionally, the architectural flexibility that made NoSQL a good solution for multisourced data also leaves it vulnerable to attack.

SOLUTION: Enforce data integrity via the following best practices. 

  • Use SSL encryption. Database contents should be encrypted, protecting the data while it sits in the NoSQL database and providing additional protection in transit to the cloud. Using SSL encryption to connect the client and server ensures that only trusted computers can access the encrypted data (see the sketch after this list).
  • Prevent unauthorized activity. Monitor logs in real time to spot anomalies that may indicate an attack. Use data tags and enforced time stamps to prevent unauthorized activity.
  • Integrate NoSQL databases into an overall framework. To maintain the overall system’s performance, scalability and security, NoSQL databases should be integrated into an overall framework, providing many of the system’s security components.
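
For illustration, here is a minimal sketch of an encrypted client-server connection, assuming a MongoDB NoSQL store and the pymongo driver. The hostname and certificate paths are placeholders; your store and driver may differ.

```python
# A minimal sketch of an encrypted (TLS/SSL) client-server connection to a
# NoSQL store, assuming MongoDB with the pymongo driver. The hostname and
# certificate paths below are placeholders for illustration.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.internal:27017",
    tls=True,                                      # encrypt all client-server traffic
    tlsCAFile="/etc/ssl/corp-ca.pem",              # trust only certificates from our CA
    tlsCertificateKeyFile="/etc/ssl/client.pem",   # client cert: only trusted machines connect
)

# Verify the connection works end to end.
print(client.admin.command("ping"))
```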

Preserving privacy in data mining and analytics

While big data can enable groundbreaking insights that lead to more effective sales and operations, it can also open the door for privacy invasions, invasive marketing, decreased civil liberties, and increased state and corporate control. 

A surprisingly small amount of information can be enough to assemble a complete picture of an individual. As a result, organizations that own data are legally responsible for securing it and for the usage policies they apply to it.

Attempts to anonymize specific data don’t reliably protect privacy because so much other information is available; seemingly innocuous data can be correlated with other sources to re-identify individuals.

Data is constantly in transit, accessed by inside users and outside contractors, government agencies, and business partners sharing data for research.

SOLUTION: Even though there’s a cost in terms of money and system performance, privacy must be preserved for legal reasons. There are a few different approaches.

  • Differential privacy: Differential privacy is a formal, provable model for sharing aggregate information while limiting what can be learned about any individual, though it comes with a great deal of system overhead (see the sketch after this list).
  • Homomorphic encryption: Homomorphic encryption allows analytics to work with encrypted data. 
  • Standard privacy solutions: Older, more standard solutions include data encryption within the database, discretionary access control and stringent authorization policies. Keeping security patches up to date is also recommended.
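
As a concrete example, here is a minimal sketch of the Laplace mechanism, one standard way differential privacy is implemented for counting queries. The dataset, epsilon value and query are illustrative.

```python
# Minimal sketch of differential privacy via the Laplace mechanism: noise
# calibrated to sensitivity/epsilon is added to an aggregate query, so no
# single individual's record materially changes the released answer.
import math
import random

def dp_count(records, predicate, epsilon=0.5):
    """Release a count with Laplace noise; the sensitivity of a count is 1."""
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27]
print(dp_count(ages, lambda a: a > 30))  # noisy count; rerunning varies the answer
```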

A critical consideration in the implementation of privacy policies is that legal requirements vary from country to country. It’s essential to comply with the policies of the regions where you operate.

Tip: Since many breaches are committed or enabled by employees, use access control systems to control physical access to computers, and create user accounts for each employee. Limit your employees’ data access and ability to install software.

Ensuring the security of encrypted data

There are two distinct approaches to applying security controls to control data visibility: 

  • Control access to the system.
  • Apply encryption to the data itself.

The first method is easier and less costly to implement but is less effective, providing a larger “attack surface.” If an attacker breaches the system, they have access to all the data. 

Deploying encryption on all data on a granular basis helps ensure that even if there is a system breach, the data itself remains protected.

SOLUTION: Use identity- and attribute-based encryption to enforce access control on individual objects through cryptography.

  • Identity-based encryption: These business security systems can encrypt plain text so that only an entity with a specific identity can decrypt it. 
  • Attribute-based encryption: This encryption applies the same controls based on attributes instead of identities; a toy sketch follows this list. (A fully homomorphic encryption scheme, as mentioned above, would keep the data encrypted even while it’s being worked on.)
  • Group signatures: Another tool to help maintain privacy is the concept of “group signatures.” With group signatures, individual entities can access their data but publicly can be identified only as part of a group. Only trusted users would have access to specific identity information.
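
Real attribute-based encryption binds the access policy into the ciphertext mathematically. The toy sketch below only illustrates the access flow, using the `cryptography` package’s Fernet cipher as a stand-in: a key authority releases the data key only when the requester’s attributes satisfy the policy. All names and attributes are illustrative.

```python
# Toy sketch of the attribute-based idea (NOT real ABE cryptography): a key
# authority releases the data key only if the requester's attributes satisfy
# the policy. Fernet is used purely as a stand-in symmetric cipher.
from typing import Optional

from cryptography.fernet import Fernet

policy = {"department": "finance", "role": "analyst"}  # required attributes
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"Q3 pricing model")

def request_key(attributes: dict) -> Optional[bytes]:
    """Release the key only if every required attribute matches."""
    if all(attributes.get(k) == v for k, v in policy.items()):
        return key
    return None

granted = request_key({"department": "finance", "role": "analyst"})
print(Fernet(granted).decrypt(ciphertext) if granted else "access denied")
```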

Implementing granular access control

When implementing big data security, you must respect privacy concerns while still permitting data usage and analytics. This is one of big data’s greatest privacy challenges. On one hand, it’s pointless to collect data without using it. On the other hand, a data privacy breach has legal and ethical implications and marketplace effects. 

Granular access control acts on each piece of data, thus ensuring a high level of security and usability. But there are three problems with effective implementation of granular access control:

  1. Tracking security and secrecy requirements and policies in a cluster-computing environment
  2. Tracking user access throughout all components of the ecosystem
  3. Proper implementation of security and secrecy requirements with mandatory access control

Additionally, choosing an appropriate granularity level is challenging and requires knowledge of the data store and analytics systems. Access levels include the following:

  • Row-level access (since a row typically represents a single record) is often used for data derived from multiple sources.
  • Column-level access – because a column represents a specific field for all records – is often used for sensitive elements because the identification columns are not necessarily available to users.
  • Cell-level access means a label is applied to every grain of information; it can support a wide range of usages and analytics (see the sketch after this list). However, such a scheme must be rigorously applied to be effective, and in data sets this big, the overhead could be prohibitive.
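
To make cell-level labeling concrete, here is a minimal sketch: each value carries its own label, and reads redact anything the caller’s clearance doesn’t dominate. The table, labels and levels are illustrative assumptions.

```python
# Minimal sketch of cell-level access control: every value carries its own
# label, and a query redacts any cell the caller's clearance doesn't dominate.
LEVELS = {"public": 0, "restricted": 1, "secret": 2}

table = [
    {"name": ("Alice", "public"), "salary": (98000, "secret")},
    {"name": ("Bob", "public"), "salary": (87000, "secret")},
]

def read_row(row, clearance):
    """Return the row with every cell above the caller's clearance redacted."""
    level = LEVELS[clearance]
    return {col: (val if LEVELS[label] <= level else "[REDACTED]")
            for col, (val, label) in row.items()}

for row in table:
    print(read_row(row, "restricted"))
```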

SOLUTION: To cope with the complexity of tracking and implementation in big data environments where the scale is so extensive, experts recommend reducing the complexity of granular access controls on the application level. 

Instead, use the infrastructure to implement as much of the access control as possible, and adopt standards and practices that simplify whatever access controls still exist at the application level. This fits in well with the need to build a framework to support NoSQL security.

Ensure your access scheme assigns an appropriate level of granularity, balancing the size of the data store with the need for security.

Did you know? Big data can improve your social media marketing. Brands and marketers can access vast quantities of valuable data from social media sites to gain actionable insights.

What are the security risks of big data? 

Big data also presents a range of direct security risks. Here are five of big data’s most pressing security risks. 

Secure storage and transaction logging

To deal with petabytes of data, a form of storage called “auto-tiering” has become necessary. In auto-tiering, items are assigned a storage level automatically based on the organization’s policies. 

Auto-tiering creates vulnerabilities due to unverified storage services and mismatched security policies. Because data is moved automatically, rarely accessed critical information could end up on a lower tier with less security. Additionally, auto-tiering maintains transaction logs of its activities that must also be protected.

Auto-tiered environments are susceptible to two attack types: collusion and rollback.

  • Collusion attacks: In collusion attacks, service providers exchange keys and access codes, gaining access to more than the subset of data assigned to them.
  • Rollback attacks: In rollback attacks, an outdated dataset is uploaded to replace the latest version.

There are other vulnerabilities associated with big data storage: 

  • Confidentiality and integrity
  • Data provenance 
  • Consistency 

The need for availability also presents risks: Stored data must be available on demand, requiring the system to have hooks to retrieve the data. Hackers can then exploit these hooks.

SOLUTION: Technologies for dealing with some of these issues have become more robust in response to big data demands. These are some of the tech solutions:

  • Encryption is a crucial part of maintaining confidentiality and data integrity. 
  • Digital signatures using asymmetric encryption, regular audits and hash chaining can help secure the data (a hash-chaining sketch follows this list). 
  • Persistent authenticated dictionaries (PADs) allow queries against older versions of the structure and can assist in identifying rollback attacks. 
  • Secure Untrusted Data Repository (SUNDR) is a network file system designed to store data securely on untrusted servers by enforcing fork consistency, which exposes tampering when the server presents different clients with inconsistent versions of the data. 
  • Digital rights management can address collusion attacks with varying levels of encryption and decryption policies.
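
Here is a minimal sketch of hash chaining over a transaction log: each entry commits to the hash of the previous entry, so tampering with or rolling back any record breaks verification. The log records are illustrative.

```python
# Minimal sketch of hash chaining for transaction logs: each entry includes
# the hash of the previous one, so tampering or rollback breaks the chain.
import hashlib
import json

def append_entry(log, record):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    log.append({"prev": prev_hash, "record": record,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; any edit or replay of old state fails the check."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev, "record": entry["record"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "moved /data/q3.parquet to tier-2")
append_entry(log, "read /data/q3.parquet")
print(verify(log))            # True
log[0]["record"] = "tampered"
print(verify(log))            # False: the chain no longer validates
```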

The problem is not an absence of technologies, but rather the absence of an all-inclusive systemic approach or framework. The result is a patchwork of security policies instead of a unified structure where all parts work together to provide a sufficient security level.

Granular audits

The goal of real-time security monitoring is to raise the alert at the first sign of trouble. However, because it’s difficult to pick out real threats from a huge number of false alarms, that doesn’t always happen, so it’s essential to conduct frequent, granular audits to identify breaches after the fact. 

Audit information also helps identify exactly what happened so a similar attack or problem can be avoided in the future. An effective audit depends on four factors:

  • Completeness of the information required for the audit
  • Timely access to the information
  • Integrity of the information
  • Controlled access to the information to prevent tampering that would compromise its integrity

SOLUTION: Start by enabling logging options for all components to ensure information completeness. This includes applications at all layers, including operating systems. Deploy a forensic or SIEM (security information and event management) tool to collect, analyze and process the log information. Do this outside of the infrastructure you use for the data so that it’s not subject to the same risks as the data being audited.
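
As a small illustration, here is a sketch of shipping audit logs off-box with Python’s standard logging module, so the records don’t live on the same infrastructure they describe. The collector address is a placeholder for whatever SIEM or forensic tool you deploy.

```python
# Minimal sketch of shipping application logs off-box so audit records aren't
# stored on the infrastructure they describe. The collector address is a
# placeholder; a SIEM would ingest and correlate these events downstream.
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("bigdata.audit")
logger.setLevel(logging.INFO)

# Send events over syslog (UDP/514) to an external collector.
handler = SysLogHandler(address=("siem-collector.example.internal", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user=etl-worker-17 action=read object=/data/customers.parquet")
logger.warning("user=unknown action=login-failed source=10.4.2.88")
```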

Tip: Cyber insurance can protect your organization by covering its liability for any data breaches involving sensitive customer information.

Data provenance and verification

Big data is collected from various sources; in enterprise settings, this can mean millions of end-user machines. In this environment, data trustworthiness is paramount. As the volume grows, so does the complexity of the provenance. 

Provenance information is contained in the metadata attached to each data object, providing information about the object’s creation.

In big data applications, the provenance metadata includes the provenance for the big data infrastructure itself, which is like having meta-metadata. As development in this area progresses, provenance metadata will become more complex due to large graphs generated from provenance-enabled big data applications. Analytics for graphs of this size and complexity are very resource-intensive in terms of computational overhead.

There are two major threats to data integrity in big data applications:

  • Malfunctioning infrastructure 
  • Attacks on infrastructure from inside or outside the organization

The provenance metadata must be protected as well so that audits and other detection methods can be effective in verifying data sources.

SOLUTION: Very granular access control is an essential starting point for securing provenance and verification metadata. Data-independent persistence should also be satisfied when updating provenance graphs: even if a data object is removed, it may be an ancestor of other data, so its provenance should be retained.

Furthermore, access control should be dynamic and scalable, and should use lightweight, fast authentication to reduce overhead. Secure channels between infrastructure components should be part of the architecture, and responsive, flexible revocation mechanisms should be included.
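
Here is a minimal sketch of data-independent persistence: deleting an object removes its payload but keeps its provenance node, so descendants remain traceable to their sources. The object names are illustrative.

```python
# Minimal sketch of data-independent persistence in a provenance graph:
# deleting an object drops its payload but retains its provenance node,
# since descendants still point to it as an ancestor.
provenance = {
    "raw_clicks": {"derived_from": [], "deleted": False},
    "sessions":   {"derived_from": ["raw_clicks"], "deleted": False},
    "funnel_rpt": {"derived_from": ["sessions"], "deleted": False},
}

def delete_object(name):
    provenance[name]["deleted"] = True  # payload gone, lineage preserved

def lineage(name):
    """Walk the graph back to every ancestor, deleted or not."""
    out = []
    for parent in provenance[name]["derived_from"]:
        out += lineage(parent) + [parent]
    return out

delete_object("raw_clicks")
print(lineage("funnel_rpt"))  # ['raw_clicks', 'sessions'] - still traceable
```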

Endpoint input validation and filtering

Because of the breadth of data sources, including endpoint collection devices, a significant challenge facing big data schemes is whether the data is valid from the point of input. Given the size of the data pool, we must ask these questions:

  • How can we validate the sources?
  • How can we be sure that a source of input data is not malicious or simply incorrect?
  • How can we filter out malicious or unreliable data?

Data collection devices and programs are susceptible to attack. An infiltrator may spoof multiple IDs and feed fake data to the collection system in an ID clone attack or Sybil attack.

SOLUTION: Solutions must take two approaches: 

  • Tampering prevention
  • Detection and filtering of compromised data

However, it is virtually impossible to build a complex, extensive system completely resistant to tampering. Although there is no secure way to ascertain data integrity at the input, you can implement these three recommendations to deal with this situation:

  • Understand the inherent unreliability. The big data collection system design must consider this inherent unreliability and the inevitability of relying on untrusted devices and try to develop the most secure data collection platforms and applications possible.
  • Identify likely attacks. The system should be able to identify likely Sybil and ID cloning attacks and be prepared with cost-effective ways to mitigate the attacks.
  • Develop detection and filtering algorithms. Understanding that a determined adversary can infiltrate any existing system with false data, designers must develop detection and filtering algorithms to find and eliminate malicious input (a toy filter follows this list).
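
As a toy example of such a filter, the sketch below flags device IDs whose reading streams are identical, a naive heuristic for cloned or Sybil identities. Real systems combine many signals; the data here is an illustrative assumption.

```python
# Toy sketch of input filtering: flag device IDs that report identical reading
# streams, a naive heuristic for cloned or Sybil identities. Real systems
# combine many signals; this data and rule are purely illustrative.
from collections import defaultdict

readings = [
    ("dev-01", 21.4), ("dev-02", 19.8), ("dev-03", 21.4),
    ("dev-01", 22.0), ("dev-02", 20.1), ("dev-03", 22.0),
    ("dev-01", 21.7), ("dev-02", 19.9), ("dev-03", 21.7),
]

# Rebuild each device's stream, then group devices by identical streams.
history = defaultdict(list)
for device_id, value in readings:
    history[device_id].append(value)

by_sequence = defaultdict(list)
for device_id, seq in history.items():
    by_sequence[tuple(seq)].append(device_id)

suspects = {d for ids in by_sequence.values() if len(ids) > 1 for d in ids}
print(sorted(suspects))  # ['dev-01', 'dev-03'] - identical streams, likely clones

clean = [(d, v) for d, v in readings if d not in suspects]
```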

Real-time security monitoring

Real-time security monitoring is intended to alert the organization at the first sign of an attack. However, there is an enormous amount of feedback from SIEM systems, whose aim is to provide big-picture feedback on an organization’s data security in real time.

Few organizations have the resources to monitor this feedback with the kind of oversight and analysis necessary to distinguish actual attacks from false alarms. Privacy considerations drive the need for strong security, but the analytics used to identify attacks must themselves comply with privacy laws.

SOLUTION: Big data analytics can be used to identify threats, including differentiating real threats from false positives. Logs can be mined for anomalous connections to the cluster. Improved analytics can also help separate the false positives.

Your big data framework must include analysis and monitoring tools. If they are not available within the framework, such tools can be placed in a front-end system whose job is primarily to provide the analytics needed to assess SIEM feedback and identify threats.
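
For illustration, here is a minimal sketch of mining connection logs for anomalies: sources whose hourly connection counts far exceed the median are flagged, a crude stand-in for the richer analytics a SIEM front end would apply. The data and threshold are illustrative assumptions.

```python
# Minimal sketch of log mining for anomalous connections: flag sources whose
# hourly connection counts far exceed the median. The data and the 5x-median
# threshold are illustrative; real analytics would use richer features.
from statistics import median

connections_per_hour = {
    "10.0.0.12": 42, "10.0.0.15": 39, "10.0.0.17": 44,
    "10.0.0.21": 41, "203.0.113.9": 310,
}

baseline = median(connections_per_hour.values())
for source, count in connections_per_hour.items():
    if count > 5 * baseline:
        print(f"ALERT: {source} made {count} connections (median is {baseline})")
```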

What is the best security software?

Most attacks on data start through everyday deception, such as stealing or capturing passwords, infecting company computers with viruses and malware, and breaking into the Wi-Fi network, either at the company location or when employees are working remotely.  

We examined and reviewed the best antivirus and internet security software solutions and have chosen two that can best help organizations mitigate big data’s security challenges.

Bitdefender GravityZone Ultra Suite

GravityZone Ultra provides layered endpoint protection for your company’s laptops, desktops and servers, as well as employees’ mobile devices. It can detect early signs of an attack and can even prevent some attacks before they happen. 

If an attack occurs, the system can mitigate the damage and disinfect and quarantine malware-infected software in real time. GravityZone monitors and archives the system’s processes, files, registry and more to speed recovery after an attack, and it includes a two-way firewall. 

Avast Business Antivirus Pro Plus

Avast is more than just antivirus software; it also has advanced security features. One example is its Smart Scan function, which detects vulnerabilities like unsafe system settings, bad passwords and sketchy browser add-ons. 

While Avast has a fairly robust free version, if you are dealing with big data, consider upgrading to its paid version, which features a VPN, webcam protection, password protection, USB protection and patch management to automatically fix software vulnerabilities that cyberattackers could exploit. 

Early threat detection with big data analytics

An emerging security area uses big data analytics to detect threats at an early stage, applying pattern analysis to identify anomalies that may indicate a breach or a threat. This shows that big data analytics is itself an effective, extensible security technique.

In a game of chess, eventually you’ll use the king to help in its own defense. Similarly, you can employ big data analytics to analyze situations, evaluate interactions and help model realistic, effective solutions to big data’s own security dangers. 

Jennifer Dublino contributed to the writing and reporting in this article. 


Jason Parms
Jason Parms is customer service manager at SSL2BUY LLC. He is responsible for administering the customer service division and ensuring the organization provides the maximum level of customer service. He achieved this goal quickly through diversified SSL security products and incomparable support. Today, SSL2BUY secures thousands of websites and has many happy customers.