Ransomware and SQL Server – how to protect data

What is ransomware

Ransomware is a type of malicious software that locks access to user files until a ransom is paid. A common “locking mechanism” is to encrypt all files on a user’s computer, a server or even the entire network. Encryption is the process of scrambling the original data with an encryption key; the same key is then required to decrypt the scrambled data back to its original form. Without that key the data is unusable – every file on the computer, server or network becomes a pile of garbage, and the only way to decrypt that garbage is to obtain the original encryption key by paying the ransom.

Paying the ransom is never a good idea. It encourages attackers to carry on attacking, and you never know whether you will actually get the keys or whether the decryption will work at all.

How does ransomware spread

Like any other computer virus, ransomware spreads through phishing emails, infected websites and physical media such as USB drives.

If your computer gets infected, the ransomware may spread to every network resource your account has access to. I’m going to ignore any SMB (Windows File Share) vulnerabilities that would allow unauthorised access, as that is a whole different subject.

How to protect against ransomware

I was lucky enough to have spent a lot of my career in enterprise-grade Data Centers with thousands of servers where security was a top priority. Hopefully, you will find my tips useful.

Segregate your access

  • Do not use the same Active Directory account for daily desktop activities, such as browsing the internet and email, and for server access. Use a dedicated account for any server access, and don’t make your SQL Server sysadmin account a local admin account on Windows. A quick way to audit who holds sysadmin is shown in the sketch after this list.
  • Use a limited user account for day-to-day work and a dedicated privileged (admin) account for administration. Do not use the admin account for browsing the internet. This prevents downloaded malware from running under an account with elevated privileges and potentially reaching the entire network wherever that admin access is unrestricted. UAC has greatly mitigated this risk, but some risk remains, so better safe than sorry.
  • Never give privileged access to end users, even for a short time.
  • Consider using a separate AD forest for the production environment to eliminate any cross-contamination with desktop accounts.
  • If you don’t have Active Directory, don’t create the same “local admin” account with the same password on every single server. I will write more about the benefits of Kerberos authentication in future posts.
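A quick way to see who currently holds sysadmin on an instance is to query the server role catalog views. This is only a minimal sketch for spotting shared desktop or “local admin” accounts that shouldn’t be there; the names it returns will obviously be specific to your environment.

    -- List all members of the sysadmin fixed server role.
    SELECT  m.name        AS sysadmin_member,
            m.type_desc   AS principal_type,
            m.is_disabled
    FROM    sys.server_role_members AS rm
    JOIN    sys.server_principals   AS r ON r.principal_id = rm.role_principal_id
    JOIN    sys.server_principals   AS m ON m.principal_id = rm.member_principal_id
    WHERE   r.name = N'sysadmin'
    ORDER BY m.name;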

Segregate your networks

  • Leverage physical or logical (VLAN) network isolation and put your servers in a separate network from your workstations and other devices with internet access. Restrict your servers’ ability to connect to the internet. A quick way to check which clients are actually connecting is shown after this list.
  • Have “jump servers” bridging your disparate networks – these could be the “DBA Management” servers you likely already have.
  • Don’t put your database servers in the DMZ. For servers backing public applications, a setup with two or more firewalls may be appropriate:
    Public |> DMZ ||> Databases <|| Workstations.
    This way you mitigate the exposure of the underlying OS to the public network and only open the SQL ports on the firewall between the DMZ and the database network.
  • Don’t extend your “office” Active Directory forest into the DMZ; create a new, separate and limited forest instead. Active Directory authentication brings security benefits (Kerberos) and easier account management. Without AD you will end up managing potentially lots of accounts across many servers.
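As a sanity check that only the expected networks are reaching your instance, the connection and session DMVs show where current sessions come from. A minimal sketch – note it only reflects connections open right now, not history:

    -- Where are current connections coming from?
    -- Useful for spotting workstations talking directly to a server
    -- that should only accept traffic from the app tier or jump servers.
    SELECT  c.client_net_address,
            s.login_name,
            s.host_name,
            s.program_name,
            COUNT(*) AS connection_count
    FROM    sys.dm_exec_connections AS c
    JOIN    sys.dm_exec_sessions    AS s ON s.session_id = c.session_id
    GROUP BY c.client_net_address, s.login_name, s.host_name, s.program_name
    ORDER BY connection_count DESC;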

Patch your servers

I often see SQL Servers that have never been patched or updated since installation – neither the SQL Server instance nor the underlying Windows Server. Microsoft spends a lot of money to ensure the best possible security. Patches are released often for a good reason: use them.
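You can check which build an instance is actually running straight from T-SQL and compare it against the latest Cumulative Update for that version. A minimal sketch (ProductUpdateLevel is only populated on newer builds):

    -- Current version, patch level and edition of the instance.
    SELECT  SERVERPROPERTY('ProductVersion')     AS product_version,
            SERVERPROPERTY('ProductLevel')       AS product_level,        -- RTM / SPx
            SERVERPROPERTY('ProductUpdateLevel') AS product_update_level, -- CUx on newer builds
            SERVERPROPERTY('Edition')            AS edition,
            @@VERSION                            AS full_version_string;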

Do backups

  • It seems obvious, but I see a lot of servers where backups either fail or are never taken because of “oh, I didn’t know this database was not included in the daily backup”. Produce a daily or weekly exception report that shows any production database without a backup – see the sketch after this list.
  • Make sure you back up any users (logins), agent jobs and system objects.
  • Back up your encryption keys and certificates for any TDE-enabled databases – see the certificate sketch after this list!
  • Write down your configuration. Even better – keep an inventory.
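For the exception report mentioned above, msdb already holds the backup history, so a simple query can list every database whose last full backup is missing or too old. A minimal sketch – the 24-hour threshold is just an assumption, adjust it to your own RPO:

    -- Databases with no full backup in the last 24 hours (or no backup at all).
    -- Run from an Agent job and alert on any rows returned.
    SELECT  d.name AS database_name,
            MAX(b.backup_finish_date) AS last_full_backup
    FROM    sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
            ON b.database_name = d.name
           AND b.type = 'D'                      -- full backups only
    WHERE   d.name <> 'tempdb'
      AND   d.state_desc = 'ONLINE'
    GROUP BY d.name
    HAVING  MAX(b.backup_finish_date) IS NULL
        OR  MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());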
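And for the TDE point: without the certificate and its private key, a restored TDE database is just as unreadable as ransomware-encrypted files. A minimal sketch of backing them up – the certificate name, file paths and password below are hypothetical placeholders:

    USE master;
    GO
    -- Back up the certificate that protects the database encryption key.
    -- Store the files and the password away from the server, ideally offsite.
    BACKUP CERTIFICATE TDE_Cert_Example
        TO FILE = N'D:\KeyBackup\TDE_Cert_Example.cer'
        WITH PRIVATE KEY (
            FILE = N'D:\KeyBackup\TDE_Cert_Example.pvk',
            ENCRYPTION BY PASSWORD = N'UseAStrongPasswordHere!'
        );
    GO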

Protect your backups

Follow the 3-2-1 backup approach:

  1. One local backup (for quick recoveries).
  2. A second copy of the local backup on the local network or on a separate device. If you have a SAN, you could automate storage snapshots of the backup drives.
  3. A third copy of the backup in an offsite location – could be online storage.

It is fair to say that malware will not be able to access online storage without the access keys, so this is a fairly secure option. If you are paranoid, you can send backups to archive storage, which I blogged about before.

You can easily implement the 3-2-1 routine in SQL Server for free. The fantastic and free Ola Hallengren Backup and Maintenance Solution or dbatools will give you everything you need. SQL Server can also write backups to Azure natively since version 2012 – a sketch of that follows below. I will explain how to implement each in future posts.
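As a sketch of the native backup-to-Azure option, using the SQL Server 2016+ syntax with a Shared Access Signature (the storage account, container, database name and SAS token are hypothetical placeholders):

    -- One-off setup: a credential named after the container URL,
    -- holding a Shared Access Signature for that container.
    CREATE CREDENTIAL [https://examplestorage.blob.core.windows.net/sqlbackups]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
             SECRET   = 'sv=...';   -- SAS token without the leading '?'
    GO

    -- Offsite copy of the backup, written straight to blob storage.
    BACKUP DATABASE [ExampleDB]
        TO URL = N'https://examplestorage.blob.core.windows.net/sqlbackups/ExampleDB_FULL.bak'
        WITH COMPRESSION, CHECKSUM, COPY_ONLY;
    GO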

Test your backups

Corrupted backups are no good. You have to make sure that the three copies of your backups are valid and that you can restore from them. Set up weekly or monthly automated restore testing across your entire environment – this is also possible with dbatools, and the underlying idea is sketched below.
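In plain T-SQL the idea looks roughly like this: restore the latest backup under a throwaway name, check it, then drop it. A minimal sketch – the database, logical file names and paths are hypothetical, and dbatools (Test-DbaLastBackup) automates the same pattern for you:

    -- 1. Restore the most recent full backup under a different name.
    RESTORE DATABASE [ExampleDB_RestoreTest]
        FROM DISK = N'D:\Backup\ExampleDB_FULL.bak'
        WITH MOVE 'ExampleDB'     TO N'D:\RestoreTest\ExampleDB_Test.mdf',
             MOVE 'ExampleDB_log' TO N'D:\RestoreTest\ExampleDB_Test.ldf',
             CHECKSUM, REPLACE, STATS = 10;
    GO

    -- 2. Make sure the restored copy is physically and logically consistent.
    DBCC CHECKDB ([ExampleDB_RestoreTest]) WITH NO_INFOMSGS, ALL_ERRORMSGS;
    GO

    -- 3. Clean up.
    DROP DATABASE [ExampleDB_RestoreTest];
    GO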

How to recover from a disaster

Having valid backups is the first step to success. The second step is having servers to restore those backups to. Where entire servers are affected and have to be rebuilt, re-provisioned or restored, this can take days or weeks. Often a secondary data centre is used for Disaster Recovery to speed up the process; it serves the load until the primary data centre is back in operation.

In any case, the disaster recovery approach will depend on your infrastructure and many other factors. Whatever these are, you should be doing Disaster Recovery testing at least a few times a year to make sure that:

  • You know how to rebuild a server and ideally have it automated.
  • You know how to restore databases, especially TDE-enabled databases and their certificates – see the sketch after this list.
  • Application teams know how to restore all applications and reconnect them to the databases.
  • If there is a secondary Data Centre, services can be failed over and everyone knows how to do it.
  • Everyone knows what to do in a critical situation.
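For the TDE point in particular, the restore only works if the certificate exists on the target server first. A minimal sketch, assuming the certificate was backed up as in the earlier sketch (names, paths and passwords are placeholders):

    USE master;
    GO
    -- The master database needs a database master key before the certificate
    -- can be created: CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'...';

    -- 1. Recreate the TDE certificate on the recovery server from the backed-up files.
    CREATE CERTIFICATE TDE_Cert_Example
        FROM FILE = N'D:\KeyBackup\TDE_Cert_Example.cer'
        WITH PRIVATE KEY (
            FILE = N'D:\KeyBackup\TDE_Cert_Example.pvk',
            DECRYPTION BY PASSWORD = N'UseAStrongPasswordHere!'
        );
    GO

    -- 2. Only now can the TDE-enabled database be restored.
    RESTORE DATABASE [ExampleDB]
        FROM DISK = N'D:\Backup\ExampleDB_FULL.bak'
        WITH CHECKSUM, RECOVERY, STATS = 10;
    GO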

Write job cards and step-by-step documents. You will be under pressure and stress while recovering from a major outage; memory will fail and you will make mistakes.

Conclusion

Hopefully, in this very short post, I was able to give you some ideas about how to protect your data and backups from ransomware. Security is critical. If you are unsure how to get it right, don’t be afraid to say “I don’t know” and ask someone who does. Let me know in a comment if I have missed anything or if you would like to add more ideas.

If you like this kind of theoretical approach, please let me know and I’ll share more of it.

This post was originally published on May 12, 2020.
