
In the fourth part of our series on data residency, we look at the various encryption techniques and examine their advantages and disadvantages.
Messages and information have been encrypted since ancient times. Modern times, however, set new requirements. For maximum protection, encryption must start as close to the user as possible: it must be user-friendly, work transparently and guarantee a high level of operational security.
In principle, data can be available in the cloud in two different forms: as files or as fields.
With increasing digitization, data is stored more and more as fields instead of files. It is therefore imperative that both forms are protected: data in files as long as the cloud is primarily used as "Platform as a Service" (PaaS), for example with SAP or Microsoft Azure, and data in fields as the cloud is used more and more as "Software as a Service" (SaaS), for example with Microsoft 365.
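To make the distinction concrete, here is a small illustrative sketch (names and values are made up): the same customer information once as a file, as it would sit in PaaS file storage, and once as individual fields, as a SaaS application would store it.

```typescript
// Illustrative only: the same information once as a file and once as fields.
// Both forms end up in the cloud, and both need protection.

// "File" form - e.g. a contract stored as a document in PaaS file storage:
const contractFile = {
  name: "contract-mueller.pdf",
  content: new Uint8Array(), // binary document content
};

// "Field" form - the same customer captured as a record in a SaaS application:
const customerRecord = {
  lastName: "Mueller",
  email: "mueller@example.com",
  contractValue: 120000,
};
```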
Encryption should be simple and user-friendly - ideally, end users should have nothing to do with it. The following locations can therefore be considered for encryption and decryption:
... on the endpoint in the browser
The encryption would probably be implemented as JavaScript or in a similar form. Since the method is delivered with the JavaScript and the browser is where the operation takes place, the browser would also have to receive the keys. The solution is therefore only as secure as the browser's ability to protect the key and authenticate the user. This is technically possible, but rarely done because it is highly error-prone. In addition, it can easily be overridden via browser settings, and maliciously recording the data would not be particularly difficult either. From my point of view, this way of protecting data on the endpoint makes no sense.
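For illustration, a minimal sketch of what browser-side encryption with the standard Web Crypto API could look like. It is not a recommendation; it simply shows that the key ends up in the very JavaScript context that a manipulated browser, extension or script could inspect.

```typescript
// Minimal sketch: AES-GCM encryption in the browser via the Web Crypto API.
// Illustrative only - the key lives in the same JavaScript context as the page,
// which is exactly the weakness described above.
async function encryptInBrowser(
  plaintext: string
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer; key: CryptoKey }> {
  // The key is generated (or would be delivered) inside the browser context.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false, // not extractable - but the running script can still use it freely
    ["encrypt", "decrypt"]
  );

  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce for AES-GCM
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );

  return { iv, ciphertext, key };
}
```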
... via an active component close to the operating system
Software on the endpoint would hook into the file system (for files) on the one hand and into the network stack (for fields) on the other. Such an invasive approach can hardly be operated in a stable manner: Windows, macOS, Android and the other operating systems change too quickly, too frequently and are too different from one another.
... directly in an application
The application on the endpoint can handle the encryption itself. Microsoft, for example, does this with its rights management for files. However, this approach does not work for fields in a web application, so it is primarily an option for files.
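As an illustration of the principle (not of Microsoft's rights management, whose mechanics are far more involved), here is a minimal Node.js/TypeScript sketch of an application encrypting a file before it leaves the endpoint; key management is deliberately left out.

```typescript
// Minimal sketch: an application encrypts a file with AES-256-GCM before it ever
// leaves the endpoint. Node.js built-in modules only; key management is out of scope here.
import { createCipheriv, randomBytes } from "node:crypto";
import { readFileSync, writeFileSync } from "node:fs";

function encryptFile(inputPath: string, outputPath: string, key: Buffer): void {
  const iv = randomBytes(12); // 96-bit nonce for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);

  const plaintext = readFileSync(inputPath);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const authTag = cipher.getAuthTag();

  // Store nonce + auth tag + ciphertext together so the file can be decrypted later.
  writeFileSync(outputPath, Buffer.concat([iv, authTag, ciphertext]));
}

// Usage (the key would come from the application's key management, not from the provider):
// encryptFile("report.docx", "report.docx.enc", randomBytes(32));
```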
... in an inline network component somewhere between user and provider
The communication between the user and the provider is intercepted in between, and encryption and decryption take place there. The user is usually unaware of this, and the process is fast. However, an "in-between" must be possible. While this usually works well with fields, it is somewhat more complicated with files. This has to do with how files are synchronized between the local system and the cloud: as a rule, a changed file is not transferred in its entirety; only the changed parts are copied. The encryption appliance, however, cannot encrypt the entire file consistently if it only ever receives individual parts of it. With fields, this usually works very well.
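A minimal sketch of the field-level part of such a gateway: a JSON payload on its way to the provider has its designated fields replaced with ciphertext before the request is forwarded. The field names and the upstream URL are purely illustrative assumptions.

```typescript
// Minimal sketch of an inline gateway that encrypts designated JSON fields
// before the request continues to the cloud provider. Field names and the
// upstream URL are hypothetical; a real gateway also decrypts on the way back.
import { createCipheriv, randomBytes } from "node:crypto";

const FIELDS_TO_ENCRYPT = ["lastName", "email", "phone"]; // assumed PII fields
const KEY = randomBytes(32);                              // in practice: customer-held key (BYOE)

function encryptField(value: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(value, "utf8"), cipher.final()]);
  // Encode nonce + tag + ciphertext so the gateway can decrypt on the response path.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function encryptOutboundPayload(payload: Record<string, unknown>): Record<string, unknown> {
  const result = { ...payload };
  for (const field of FIELDS_TO_ENCRYPT) {
    if (typeof result[field] === "string") {
      result[field] = encryptField(result[field] as string);
    }
  }
  return result;
}

// The provider only ever receives the transformed payload, e.g.:
// await fetch("https://saas.example.com/api/contacts", {
//   method: "POST",
//   body: JSON.stringify(encryptOutboundPayload({ lastName: "Muster", email: "a@b.ch" })),
// });
```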
... in an active component that talks to the provider
The data arrives at the provider unencrypted (apart from the transport encryption). If the provider makes this information available via an API, an active component such as a Cloud Access Security Broker (CASB) can fetch the unencrypted data via the API and write the encrypted data back via the API. The problem here is that the provider has seen the unencrypted data and has usually stored it, at least temporarily. Whether the unencrypted data is then deleted quickly or not depends on the implementation. Again, a question of trust, with no guarantee that this cannot change in the future.
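A sketch of this API-based pattern, with a hypothetical provider endpoint and made-up field names; the comments mark the point where the plaintext has already been at the provider.

```typescript
// Minimal sketch of the API-based (CASB-style) pattern: fetch records that arrived
// in plaintext, encrypt the sensitive fields, and write them back via the same API.
// The endpoint, token and field names are hypothetical placeholders.
const API_BASE = "https://provider.example.com/api/v1"; // hypothetical provider API
const TOKEN = process.env.PROVIDER_API_TOKEN ?? "";

async function reEncryptRecord(recordId: string, encrypt: (s: string) => string): Promise<void> {
  const headers = { Authorization: `Bearer ${TOKEN}`, "Content-Type": "application/json" };

  // 1. The record is fetched in plaintext - i.e. the provider has already seen it.
  const res = await fetch(`${API_BASE}/records/${recordId}`, { headers });
  const record = await res.json();

  // 2. Sensitive fields are replaced with ciphertext ...
  const updated = { ...record, email: encrypt(record.email), phone: encrypt(record.phone) };

  // 3. ... and written back. Whether the plaintext version survives in logs,
  //    backups or caches is entirely up to the provider's implementation.
  await fetch(`${API_BASE}/records/${recordId}`, {
    method: "PUT",
    headers,
    body: JSON.stringify(updated),
  });
}
```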
... at the provider before the application
The provider offers encryption directly in front of its user interface. This is still rarely done. With this solution, it depends on where the key comes from, how good the encryption is and whether it is ensured that the data does not survive anywhere in an unencrypted state.
... at the provider in the application
The application is built from the start in such a way that the data is encrypted after the logic layer, before it is stored. Larger providers such as Salesforce or ServiceNow can do this; smaller ones generally cannot. Retrofitting is complex and very expensive, and migrating the existing data is a rather difficult task. In addition, this solution also depends on where the key comes from and how good the mechanism is. It is solely up to the provider to ensure that this is done correctly - an unencrypted shadow copy could be created without any problems.
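Conceptually, such an application encrypts in its persistence layer, between the business logic and the database. A minimal, product-agnostic sketch (the repository interface and field names are illustrative):

```typescript
// Minimal sketch of application-side encryption between the logic layer and storage:
// the persistence adapter encrypts designated fields just before writing to the database.
// The repository interface and field names are illustrative, not a specific product's API.
type Encrypt = (plaintext: string) => string;

interface CustomerRecord {
  id: string;
  lastName: string;  // sensitive - stored encrypted
  dealPhase: string; // non-sensitive - stored in plaintext
}

class EncryptingCustomerRepository {
  constructor(
    private readonly store: Map<string, CustomerRecord>, // stands in for the real database
    private readonly encrypt: Encrypt,
  ) {}

  save(record: CustomerRecord): void {
    // Encryption happens after the business logic but before persistence.
    // Whether an unencrypted shadow copy exists elsewhere (logs, caches, backups)
    // is entirely in the provider's hands - exactly the concern raised above.
    this.store.set(record.id, { ...record, lastName: this.encrypt(record.lastName) });
  }
}
```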
... at the provider at rest
The provider encrypts the data when it is stored in the database or as a file on its file container. This solution must also be analyzed more closely with regard to the origin of the key, the mechanism used and the location of the event (location of the database/file server). Control lies exclusively with the provider.
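Encryption at rest is typically implemented as envelope encryption: each object is encrypted with its own data key, which is in turn wrapped with a master key held in the provider's key management system. A minimal, provider-agnostic sketch; the wrapKey function below is only a stand-in for the real KMS call:

```typescript
// Minimal, provider-agnostic sketch of envelope encryption at rest:
// each object gets its own data key (DEK), which is itself encrypted ("wrapped")
// with a master key (KEK) that the provider's KMS controls.
import { createCipheriv, randomBytes } from "node:crypto";

// Placeholder for a KMS operation - in reality the KEK never leaves the provider's KMS.
function wrapKey(dek: Buffer, kek: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", kek, iv);
  return Buffer.concat([iv, cipher.update(dek), cipher.final(), cipher.getAuthTag()]);
}

function encryptAtRest(plaintext: Buffer, kek: Buffer) {
  const dek = randomBytes(32); // per-object data key
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", dek, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);

  return {
    ciphertext,
    iv,
    authTag: cipher.getAuthTag(),
    wrappedDek: wrapKey(dek, kek), // stored next to the data; useless without the KEK
  };
}
```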
Maximum security is provided by the "inline network component somewhere between user and provider" option, as the provider cannot see the unencrypted content. However, this solution also has its limitations: the provider can no longer "calculate" with the encrypted data and usually cannot perform any further, more complex functionality on it. In effect, this approach anonymizes the data towards the cloud. In most cases, this is sufficient.
The decisive factors are where the data is encrypted, with which mechanism, and who holds the key.
Three important elements are involved in encryption: the key, the encryption mechanism, and the location where encryption takes place.
Encryption is only under control if all three elements can be controlled. If one element falls into the hands of a third party, your encryption can be exploited by this third party, its employees, suppliers, foreign states or cyber criminals. So there is very little room for maneuver here.
BYOK stands for "bring your own key" - or, in effect, "give me your key". The key is usually handed over to the provider, who then uses it in a mechanism they control at a location of their choice. This is often proclaimed as a "secure solution". In fact, it only increases the workload for the customer (who now has to manage the keys) without significantly increasing protection - a deceptive package from the provider's marketing department.
BYOE "bring your own encryption" is better. The key, mechanism and location are controlled by the customer and are therefore much more trustworthy. This option also has the potential to solve the residency aspects of your protection needs. Most other approaches cannot do this.
Only the first three encryption approaches listed above significantly increase your security. A "provider managed key" solution, on the other hand, has little potential to disrupt provider functionality; it remains a trade-off between security and functionality. Given the reality of multi-cloud, we recommend a BYOE-based solution that focuses on Personally Identifiable Information (PII). This ensures a high level of protection for personal data with little disruption to functionality.
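What such a PII-focused BYOE policy could look like in configuration terms, sketched with made-up application and field names: the customer decides which fields are PII and must be encrypted or tokenized before they reach any cloud, while non-sensitive fields stay in plaintext so provider functionality keeps working.

```typescript
// Illustrative sketch of a BYOE-style protection policy: the customer decides
// which fields are PII and therefore leave the company network only encrypted or
// tokenized, and which fields stay in plaintext so provider functionality
// (search, workflows) keeps working. Names are made up for illustration.
interface FieldPolicy {
  field: string;
  treatment: "encrypt" | "tokenize" | "plaintext";
}

interface ProtectionPolicy {
  application: string;             // e.g. a SaaS tenant
  keySource: "customer-hsm";       // BYOE: key, mechanism and location stay with the customer
  encryptionLocation: "on-premises-gateway";
  fields: FieldPolicy[];
}

const crmPolicy: ProtectionPolicy = {
  application: "crm.example-saas.com",
  keySource: "customer-hsm",
  encryptionLocation: "on-premises-gateway",
  fields: [
    { field: "lastName",  treatment: "encrypt"   }, // PII: unreadable for the provider
    { field: "email",     treatment: "tokenize"  }, // PII: replaced by a reversible token
    { field: "dealPhase", treatment: "plaintext" }, // non-PII: provider logic still works
  ],
};
```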
You need to be aware of one thing: as soon as you no longer store (sensitive) data locally - and this has long been a reality for most companies - you relinquish control over your data. Good-sounding terms for the encryption types offered by providers, such as "Bring your own key", may intuitively inspire confidence in users, but this is no guarantee of data security. The decisive factors are where the data is encrypted, which mechanism is used and who holds the key.
If the cloud is your strategy, don't miss the next blog in this series. It's all about the multi-cloud and its specific needs. Cheers.