Thursday 20 April 2023

Cloud Security and Privacy



Chapter 17


17.1 Cloud-Specific Security Problems 

Several factors contribute to the complexity of managing cloud computation:

  • Lack of control and visibility
  • An infrastructure shared with outsiders
  • Many services with interdependencies among them
  • A dynamic execution environment with bursts
  • Remote access for all users
  • Extensive use of software from the cloud provider and third parties

Lack of control and visibility.

In a cloud environment, a tenant cannot configure or examine the underlying systems, and must trust that the provider's staff has configured security protections correctly.

An infrastructure shared with outsiders.

Running multiple tenants on the same infrastructure increases security risks, and security and privacy breaches have occurred.

Many services with interdependencies among them.

Having many microservices and allowing frequent communication among them increases the attack surface.

Dynamic execution environment with bursts.

Rapid creation of new instances makes it more difficult to distinguish normal execution from a Denial of Service (DoS) attack.

Remote access for all users.

The use of remote access increases the possible attack surface.

Extensive use of software from the cloud provider and third parties.

Building cloud-native software adds complexity, such as designing microservices that can be orchestrated.


17.2 The Zero Trust Security Model 

How can a tenant know which individuals should be allowed to access and manage services? 

The answer lies in a zero trust security model. 


The idea is straightforward: assign each user a set of privileges for each possible service. Instead of merely allowing a user to log in once and then have access to all services, validate each request separately. That is, whenever a user attempts to access a service, use the identity of the user (and possibly the identity of the user’s device) to decide whether to grant or deny the request.
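The per-request check described above can be sketched as follows; the privilege table, trusted-device list, and function names are hypothetical, chosen only for illustration:

```python
# Minimal sketch of per-request validation in a zero trust model.
# The tables and names here are hypothetical, not a real API.

PRIVILEGES = {
    # (user, service) -> set of allowed operations
    ("alice", "billing"): {"read"},
    ("alice", "storage"): {"read", "write"},
}

TRUSTED_DEVICES = {"alice": {"laptop-42"}}

def authorize(user, device, service, operation):
    """Validate every request separately: check both the user's
    privileges for this service and the identity of the device."""
    if device not in TRUSTED_DEVICES.get(user, set()):
        return False                      # unknown device: deny
    allowed = PRIVILEGES.get((user, service), set())
    return operation in allowed

# Each request is checked independently; a successful login elsewhere
# grants nothing here.
assert authorize("alice", "laptop-42", "storage", "write")
assert not authorize("alice", "laptop-42", "billing", "write")
assert not authorize("alice", "phone-7", "storage", "read")
```

Note that the device identity participates in every decision, so a stolen password alone does not grant access.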


17.3  Identity Management 

To avoid having separate authentication for each service, an identity management system provides Single Sign-On (SSO), which means a user has the same login and password credentials for all services.


An Identity Management system

  • stores information about users’ identities and their access rights, 
  • uses a single login for all services, 
  • authenticates users, 
  • ensures only authorized users access each service, 
  • and allows a user to enter credentials once for each task. 
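As a rough sketch of the single-login idea, one identity service could issue a signed token once, and every service could validate that same token. The shared key, token format, and helper names below are assumptions for illustration, not a real SSO protocol:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-identity-service-key"   # hypothetical shared secret

def issue_token(user):
    """Identity service: sign the user's identity once at login."""
    payload = json.dumps({"user": user}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def validate_token(token):
    """Any service: verify the same token without a separate login."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                        # tampered or forged token
    return json.loads(payload)["user"]

token = issue_token("alice")
# The same credential works for every service that shares the key.
assert validate_token(token) == "alice"
```

Real SSO deployments use standardized protocols (e.g., signed assertions or tokens) rather than an ad hoc scheme like this, but the flow is the same: authenticate once, validate everywhere.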


17.4  Privileged Access Management (PAM)

  • A Privileged Access Management (PAM) system handles identity management for privileged accounts (administrative or superuser level of privilege).
  • PAM systems log all accesses, providing a detailed record of which individual accessed a system at a given time. 
  • The systems also record failed login attempts, providing a way to track attackers.
  • If a staff member’s ID is used to access multiple secondary systems within a short time, the PAM system sends an alert to the manager reporting suspicious activity.
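The multi-system alert described in the last point can be sketched as below; the time window, threshold, and names are hypothetical values chosen for illustration:

```python
from collections import defaultdict

# Hypothetical sketch: flag an ID that accesses several distinct
# secondary systems within a short window (here, 60 seconds).
WINDOW = 60.0        # seconds
THRESHOLD = 3        # distinct systems before an alert fires

accesses = defaultdict(list)   # user -> list of (timestamp, system)

def record_access(user, system, now):
    """Log the access and return True if it looks suspicious."""
    log = accesses[user]
    log.append((now, system))
    recent = {s for (t, s) in log if now - t <= WINDOW}
    return len(recent) >= THRESHOLD

assert not record_access("admin7", "db1", 0.0)
assert not record_access("admin7", "db2", 10.0)
assert record_access("admin7", "db3", 20.0)   # 3rd system in 20 s: alert
```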


17.5 Protecting Remote Access

Three principles help define security practices: 

  • Keep all communication confidential 
  • Protect and isolate business data 
  • Enforce workflow security

Keep all communication confidential

  • To keep communication confidential, all data must be encrypted.
  • Virtual Private Network (VPN) technology forms a connection to the organization’s cloud data center and encrypts all packets sent over the connection.


Protect and isolate business data. 

  • Remote access introduces an additional danger: an employee may lose a device that contains confidential business data.
  • To protect it from accidental loss, all business data stored on a user’s device must be encrypted.

Enforce workflow security. 

  • As an employee performs a task, data may move from the cloud to the employee’s device and back.
  • To ensure that all data remains safe independent of its location, an organization must define a security policy for each workflow.
  • Industry uses the term workflow security to characterize the approach.


17.6 Privacy In A Cloud Environment 

  • Security systems enforce protections to guarantee the confidentiality, integrity, and availability of data.
  • Privacy refers to keeping sensitive information about an individual safe from public dissemination.
  • Privacy protection cannot focus only on the data at hand.
  • In addition, other aspects of security must be designed to prevent violations of privacy.


17.7 Back Doors, Side Channels, And Other Concerns 

  • Interestingly, privacy considerations intertwine with other aspects of data security.
  • An organization must be careful to avoid possible back doors and side channels that allow data to leak to unintended recipients. 
  • Three points of contact between applications cause concern:
      • shared storage systems
      • shared computing systems
      • shared networks
  • Instances have occurred where flaws, including flaws in the underlying hardware, allow data to transfer from one application to another through a side channel.
  • In addition, attackers try to insert back doors into systems that send copies of data to the attacker.
  • Tenants cannot predict or prevent flaws from occurring, but they can remain aware of the potential danger, and work hard to detect back doors.

A tenant must view a cloud service provider as a partner and work closely with the provider on security, both to ensure policies have been configured correctly and to handle problems that arise in the underlying infrastructure.


17.8 AI Technologies And Their Effect On Security 

Interestingly, Artificial Intelligence (AI) technologies influence security in both positive and negative ways.

  • On the one hand, attackers can use AI techniques to bypass safeguards and gain unauthorized access to data or systems. 
  • On the other hand, AI techniques can be used to strengthen safeguards.

Negative impact of AI

One attack approach is known as a deep fake. The following example illustrates how it works.

An AI program that used machine learning (ML) was first fed recordings of the voice of a company executive so the program could learn the executive’s voice and speech patterns. 

Once the learning phase was complete, the program imitated the voice while pronouncing text from a script. 

The attacker left voice mail for an employee, apparently from the company executive, asking the employee to transfer confidential data to an external site.

Positive impact of AI

  • PAM systems can use AI software to detect various types of anomalous behavior of an employee under the general topic of security analytics.
  • If a machine learning program receives notification of each network connection that arrives from the Internet, the program may be able to detect an instance of a Distributed Denial of Service (DDoS) attack. 
  • One final aspect of security analytics arises from context.
  • Suppose a given user always logs into the HR system during business hours and uses the system to update the employee database.
  • If the same credentials are used at an unusual time, conventional access control still permits the login.
  • Security analytics software takes another view: it flags the access as unusual because it does not occur in the usual context.
  • In this sense, security analytics software is context-aware.
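A minimal sketch of context-aware flagging, assuming a hypothetical table of each user's usual working hours:

```python
# Hypothetical sketch of context-aware security analytics: an access
# may be permitted by ordinary access control yet flagged because it
# falls outside the hours in which this user normally works.

USUAL_HOURS = {"hr_clerk": range(8, 18)}   # 08:00-17:59 is normal

def flag_unusual(user, hour):
    """Return True when the access occurs outside the user's context."""
    return hour not in USUAL_HOURS.get(user, range(24))

assert not flag_unusual("hr_clerk", 10)   # mid-morning: normal
assert flag_unusual("hr_clerk", 3)        # 3 AM: flagged, even if authorized
```

A production system would learn the "usual" profile from access logs rather than hard-coding it; the table above only stands in for that learned context.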

Wednesday 19 April 2023

Edge Computing and IIoT


Chapter 16



16.1 Introduction

Although the cloud approach has many advantages, it does have the disadvantage of introducing higher network latency because a data center is remote from the customers it serves.


Note: Latency is the time it takes for data to pass from one point on a network to another.


Cloud providers attempt to minimize network latency in two ways: 

  • The use of multiple, geographically diverse sites 
  • Low-latency network connections


16.2 Situations Where Latency Matters 


  • In the financial industry, for example, a small delay in making stock trades can result in a huge loss.
  • In the health care industry, a small delay in receiving data from a patient monitor can delay activation of an implanted medical treatment device.
  • Industries that employ real-time control systems — sensors and actuators that monitor and control processing — rely on low delay.

The following table lists some of the industries for which low latency can be important.


16.3 Moving Computing To The Edge

To meet low-latency requirements that a remote cloud data center alone cannot satisfy, the edge computing architecture was introduced.




Edge computing

  • Place some of the computing facilities near each source of information, and perform initial processing locally. 
  • Computation performed near devices (e.g., a smartphone) occurs at the edge instead of the data being sent to a centralized cloud data center for processing.



Simultaneously run applications in a cloud data center, and use the cloud applications to handle computational-intensive tasks.


The following illustrates how edge computing works:

Let’s assume the sensor monitors a user’s heart ECG.

  1. The data from the sensor is sent to the mobile phone to which the sensor is connected through an app.
  2. The health care app running on the smartphone checks for any abnormality in the ECG signals.
  3. If any abnormality is found, a notification is immediately sent to the doctor and the patient’s relatives so they can act.
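The steps above can be sketched as a local check on the smartphone; the normal heart-rate range and function names are assumptions chosen for illustration:

```python
# Hypothetical sketch of the edge steps above: the app checks each
# reading locally and only raises a notification on an abnormal value,
# instead of shipping every sample to the cloud.

NORMAL_BPM = (50, 120)      # assumed normal heart-rate range

def check_reading(bpm, notify):
    """Step 2: local check. Step 3: notify doctor and relatives."""
    low, high = NORMAL_BPM
    if not (low <= bpm <= high):
        notify(f"Abnormal heart rate detected: {bpm} bpm")
        return True
    return False

alerts = []
check_reading(72, alerts.append)    # normal reading, no alert
check_reading(190, alerts.append)   # abnormal, alert raised locally
assert alerts == ["Abnormal heart rate detected: 190 bpm"]
```

Because the decision is made on the device, the alert does not depend on the round-trip latency to a distant data center.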

The edge computing approach places small, auxiliary data centers near locations that require low latency responses. 


Software in the edge data center handles low-latency computations locally and runs other computations in a cloud data center. 


16.4 Extending Edge Computing To A Fog Hierarchy

  • The locations and sizes of the edge data centers depend on the applications being supported and the latency requirements. 
  • To achieve the lowest possible latency, an edge facility must be as close to each user as possible (e.g., in each cell tower). 
  • For applications with less stringent requirements, an edge computing facility might serve a neighborhood of multiple cell towers or a geographic region with many neighborhoods. 
  • To serve all applications, edge facilities can be organized into a hierarchy.



Industry sometimes uses the term fog data center to refer to an intermediate data center that serves a larger geographic area.


16.5 Edge Computing And IIoT

The term Industrial Internet of Things (IIoT) refers to an enhanced, larger-scale version of the Internet of Things. 

The primary difference between consumer IoT systems and Industrial IoT systems lies in the importance: a company depends on an IIoT system as part of a critical business function.


Characteristics found in IIoT applications

  • Specific latency requirements
  • Geo-spatial knowledge
  • Large volumes of data with various QoS requirements
  • A need for data filtering
  • High availability requirements
  • Security requirements


Specific latency requirements.

IIoT applications have specific requirements (e.g., if a specific assembly line robot malfunctions, it must be shut down within 150 milliseconds after a problem is detected).
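A minimal sketch of checking such a bound, assuming a hypothetical shutdown callback and the 150-millisecond deadline mentioned above:

```python
import time

# Hypothetical sketch: act on a detected fault and report whether the
# response met the latency bound (e.g., shut a robot down within
# 150 ms of a detected problem).

DEADLINE = 0.150   # seconds

def handle_fault(detected_at, shutdown):
    """Shut down immediately and report whether the deadline held."""
    shutdown()
    elapsed = time.monotonic() - detected_at
    return elapsed <= DEADLINE

stopped = []
ok = handle_fault(time.monotonic(), lambda: stopped.append(True))
assert stopped == [True]
assert ok   # local (edge) handling comfortably meets the bound
```

A round trip to a remote cloud data center can consume much of such a budget on its own, which is why the check belongs at the edge.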

Geo-spatial knowledge. 

An IIoT system must be aware of locations and spatial relationships.

Large volumes of data with various QoS requirements.

  • IIoT applications often employ many sensors and video cameras that each generate data continuously, resulting in large volumes of data;
  • Each type of data may have specific Quality of Service (QoS) requirements, such as data rates and bounds on latency.

A need for data filtering.

Applications running in the cloud only need data that allows them to analyze long-term trends. Hence it is not necessary to send all the raw data from sensors to the cloud.
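Edge-side filtering of this kind can be sketched as a simple per-window summary; the statistics chosen here are illustrative assumptions:

```python
# Hypothetical sketch of edge-side data filtering: instead of
# forwarding every raw sample, the edge sends the cloud only a
# per-window summary sufficient for long-term trend analysis.

def summarize(samples):
    """Reduce a window of raw sensor samples to a trend record."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

raw = [20.1, 20.3, 20.2, 20.4]   # raw readings stay at the edge
record = summarize(raw)          # only this record goes to the cloud
assert record["count"] == 4
assert record["min"] == 20.1 and record["max"] == 20.4
```

Four floating-point values replace an arbitrarily long stream of raw samples, which is the point of filtering at the edge.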

High availability requirements

Since a company depends on IIoT systems to sustain its business, the systems must be reliable. Thus, IIoT systems may need to employ redundancy (duplication).

Security requirements. 

IIoT systems must be secure from attack, and it must be possible to keep the data they gather confidential.


An IIoT system may transfer multiple types of data, each with its own requirements for performance, availability, and security.


16.6 Communications for IIoT

In an IIoT application, data collected from sensors must be transferred to multiple locations, such as a cloud data center, edge devices, or fog centers. How can all these data points communicate effectively?

Solution: a standard from the OMG (Object Management Group).


An OMG standard known as the Data Distribution Service (DDS) defines a mechanism that allows data from sensors to flow upward through a hierarchy of edge and fog centers to applications using the data.

The DDS approach has the following characteristics: 

  • Completely decentralized 
  • Suitable for industrial use
  • Publish-Subscribe interactions
  • Flexible data handling capabilities 
  • Support for an edge hierarchy 


Completely decentralized

DDS avoids a single point of failure while minimizing latency by using direct communication.

Suitable for industrial use. 

DDS offers the high reliability needed for IIoT applications.

It can be configured to meet performance requirements.

DDS offers the ability to authenticate control messages and encrypt data traffic.

Publish-Subscribe interactions. 

DDS offers a publish-subscribe communication mechanism that allows each application to choose the data it will receive.

Applications can subscribe to only the data they require.

Flexible data handling capabilities.

One subscriber may choose to receive all data while another chooses to receive only values beyond a specified threshold.

Support for an edge hierarchy.

DDS can be configured to form a distributed system.

The hierarchy can extend from cloud data centers through fog centers down to edge data centers.


Databus

  • A Databus is a communication abstraction implemented by software.
  • A Databus provides interconnections among publishers and subscribers.
  • Software modules arrange to send messages between publishers and subscribers to meet requirements and optimize communication. 
  • Databus technology includes a mechanism known as a gateway that allows a single Databus to span multiple levels of the hierarchy.
  • Rather than blindly sending a copy of every message, gateways perform a filtering function, and only forward a message to another level if one or more applications have subscribed to receive the message.
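A minimal publish-subscribe sketch in the spirit of the Databus description above; the class and method names are illustrative, not the actual DDS API:

```python
# Minimal pub-sub sketch: subscribers register a topic plus an
# optional filter, and a message is delivered (or would be forwarded
# by a gateway) only when some subscriber wants it.

class Databus:
    def __init__(self):
        self.subs = []          # (topic, filter_fn, callback)

    def subscribe(self, topic, callback, filter_fn=lambda msg: True):
        self.subs.append((topic, filter_fn, callback))

    def publish(self, topic, msg):
        delivered = 0
        for t, f, cb in self.subs:
            if t == topic and f(msg):
                cb(msg)
                delivered += 1
        return delivered        # 0 means a gateway need not forward it

bus = Databus()
all_vals, hot_vals = [], []
bus.subscribe("temp", all_vals.append)                       # all data
bus.subscribe("temp", hot_vals.append, lambda v: v > 100)    # threshold
bus.publish("temp", 42)
bus.publish("temp", 130)
assert all_vals == [42, 130]
assert hot_vals == [130]
assert bus.publish("pressure", 7) == 0   # no subscriber: not forwarded
```

The two subscribers illustrate the flexible data handling described earlier: one receives all data, the other only values beyond a threshold, and an unsubscribed topic is never propagated.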


The following figure illustrates the functioning of the DDS Databus.