How to read large SQL Server ERRORLOG (the 5GB test)

I often come across large ERRORLOG files, mainly because they are not being recycled frequently enough, or not at all. Sometimes, despite frequent recycling, the ERRORLOG can still grow to an unmanageable size. This recently happened to me during a large transaction rollback on a synchronous Availability Group. Within a few hours, the ERRORLOG grew to 5GB in size due to “Sending page request for Undo” messages.

The rollback was triggered by a primary node failure during an insert of 800 GB into a partitioned table. The failure caused the primary node to reboot, triggering a failover to the secondary replica; and, of course, once the former primary came back up as the new secondary, the rollback started. At that point, I really needed to see what errors and failures were being registered in the ERRORLOG during the service recovery, which proved difficult.

Using SSMS / Log File Viewer

Unless you are lucky enough to have a log management solution such as Splunk or Graylog, the first course of action for most DBAs is to open the SQL Server Log File Viewer within SSMS. In my case, however, it gave up quickly:


The out-of-memory exception is caused by the amount of data returned to the client. We have to be more clever. Let’s stop the loading right after opening the viewer using the “Stop” button on the top bar, apply the filter, and then hit “Refresh”. This should return fewer rows and will work…, right?


Not much luck either. Let’s look into alternative ways.

Using T-SQL

Behind the scenes, SSMS uses the dbo.xp_readerrorlog extended stored procedure and runs the following command to read the ERRORLOG:
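A sketch of the kind of call SSMS issues (the exact parameters may vary by SSMS version):

```sql
-- 0 = current log file, 1 = SQL Server error log (2 would be the Agent log)
EXEC master.dbo.xp_readerrorlog 0, 1;
```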

The procedure dbo.xp_readerrorlog accepts a number of parameters, including a filter. With this in mind, we can execute it in SSMS as T-SQL code:
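A minimal sketch of the filtered call (the search string parameter does the server-side filtering):

```sql
-- Parameters: log number (0 = current), log type (1 = SQL Server error log),
-- then up to two search strings used as filters
EXEC master.dbo.xp_readerrorlog 0, 1, N'ERROR';
```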

This took just under 30 seconds to scan through the 5GB log file and returned 387 rows containing the word “ERROR”. Great, it gives me something, but on the other hand, the output isn’t particularly useful without the surrounding rows for context:


So, what better alternatives do we have to get sensible information out of a large log file?

Using SQL Server Management Objects (SMO)

We can use PowerShell to interrogate ERRORLOG via SMO:
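A sketch of the SMO approach, assuming a default local instance (adjust the server name for your environment). Server.ReadErrorLog(0) returns the current log as rows with LogDate, ProcessInfo and Text columns:

```powershell
# Requires the SqlServer PowerShell module (or the SMO assemblies)
Import-Module SqlServer

Measure-Command {
    # Connect to the local default instance - an assumption; change as needed
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"

    # Read the current ERRORLOG (0) and keep only rows mentioning ERROR
    $server.ReadErrorLog(0) |
        Where-Object { $_.Text -match "ERROR" } |
        Out-GridView
}
```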

Using the Measure-Command cmdlet, we can measure the execution time of a particular command or script block. In this case, the above takes  TotalMilliseconds : 1023529.8419 (over 17 minutes!) to run and eventually returns 387 rows into a GridView, which we can filter further:


The slow performance is caused by the fact that SMO has to iterate through all the rows in the 5GB file to find those matching our ERROR filter. This is bound to be slow, and the result is the same as in the T-SQL example above.

Using text reader

The ERRORLOG is a text file sitting on a local disk in the Log folder. We could open it in Notepad, but that’s not a very convenient way to search through that many lines of text. Instead, we can use PowerShell to read the file with the highly efficient Select-String cmdlet.
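A minimal sketch of the approach; the log path below is an assumption and will differ between instances and versions:

```powershell
# Hypothetical path - adjust for your instance's Log folder
$log = "C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Log\ERRORLOG"

Measure-Command {
    # Stream the file and emit matching lines as they are found
    Select-String -Path $log -Pattern "ERROR" | Out-GridView
}
```

Because Select-String streams the file line by line, matches start appearing in the GridView before the scan has finished.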

In this case, the GridView opens instantly and continues loading filtered data in the background. In all the earlier examples, we had to wait for the whole data set to be filtered and returned to us; in this example, we have access to the first filtered rows immediately!


Eventually, the data load completes in the background after  TotalMilliseconds : 21949.6703 (22 seconds!)

However, the Select-String cmdlet has a few other important features. For example, it supports exclusion and “context” lines, and can return lines before and after our match:
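A sketch of both features, again assuming a hypothetical log path:

```powershell
# Hypothetical path - adjust for your instance's Log folder
$log = "C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Log\ERRORLOG"

# -Context 2,3 returns 2 lines before and 3 lines after each match
Select-String -Path $log -Pattern "ERROR" -Context 2,3

# -NotMatch inverts the filter: only lines NOT containing the pattern,
# e.g. to suppress the flood of rollback progress messages
Select-String -Path $log -Pattern "Sending page request" -NotMatch
```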



The winner in both the performance and functionality categories is the Select-String cmdlet, taking just over 20 seconds to find matching rows in a 5GB file, with native T-SQL taking just under 30 seconds to do the same, and the slowest being the SMO method at over 17 minutes! Reading the ERRORLOG as plain text may not be the easiest way and has its limitations. For example, it will only work on the localhost unless we can access the remote ERRORLOG via a UNC path. However, it is definitely the most efficient and the most flexible approach, especially if you have big files that you’d like to run regular expressions against.
