Caught between security and stupidity

A simple call to tech support first uncovers an alarming problem, then leads to another festering issue at a client company


I was the on-call tech in IT support at a datacenter serving small and midsize businesses. One of our clients called to say she was having trouble with a remote desktop connection to the server that ran one of their most critical applications.

After attempting to troubleshoot it for about a half-hour, I decided to call the software vendor to ask for assistance. The team gave me the usual spiel: "We have a tech that can help, but it will be about a half hour, maybe an hour before they can get back to you." I said it was fine as long as they knew we needed assistance as soon as possible.

Surprisingly enough, they were true to their word and called me back about 45 minutes later. They walked me through basic troubleshooting for their software, most of which I had already covered myself, minus a couple of small details.

A turn for the worse

Twenty minutes into that call, I got the kind of news that sends a chill down the spine of any IT support or datacenter tech: "Looks like you have a ransomware attack … Let me show you what I'm seeing."

Much to my dismay, she was correct. Luckily, there was some good news at this point: We were not yet managing the servers that were hit -- this was a new client, and we were still in the process of taking over their support.

I alerted my boss to the situation and ran a scan on the client's own computer. Thankfully, the encryption attack had not spread to our user's computer or her network.

It was now evening, so I called the support team's after-hours tech at the parent company and told them what was going on. The very first question their technician asked me, in all seriousness, was: "How much is the ransom?" I didn't know whether to be ticked off or to pass out from shock. Then he said he should probably call the server tech.

Fixes, but more problems

It took their server tech until the wee hours of the morning, but they found where the infection came from, applied the appropriate remediation, and had everything back in operation before employees arrived for work.

A month or so later, the parent client that had fallen victim to the attack officially signed on with our service desk. I set out to identify their backup software and where the backups were being stored -- and discovered a gigantic mess.

There was no NAS device at all. They had four servers, all at the same location, each one backing up in one way or another to a different server. The IT department's idea of extended storage for backup files was to attach 1TB external hard drives to the servers and "hide" them from the main OS view. The backups themselves were made with very basic software that offered only the bare minimum of controls.

In the end, we got them set up properly, with their backups secure and protected. Then we moved on to see what the next day would bring.
