Trouble spotted on the network

No sophisticated SOC? You can still be pretty sure that you’re aware of anything potentially troublesome.

We had a scare the other day with a critical cross-site scripting (XSS) attack that seemed to be entirely contained — source and destination — on our own network. Tracking it down and resolving the issue were fairly routine procedures, but it’s worth noting how it’s possible to spot potential security problems when you don’t have a world-class security operations center (SOC) that’s staffed with skilled analysts and stuffed with large-screen monitors and all the bells and whistles.

When you work for a smaller organization, you don’t have the luxury of a 24/7 SOC. In my company, we compensate by building automation into the monitoring of our logs and cherry-picking events that will generate email notifications. Other events get our attention when we can carve out time to monitor the threat logs generated by our advanced firewalls and the security logs produced by a multitude of other devices: web and database servers, load balancers, proxies, file integrity monitoring software, etc. We collect the logs on a centralized server, and a few filters help identify logs that meet certain criteria. A couple of analysts and I take turns monitoring the filtered logs. We don’t get 24/7 coverage, but it’s pretty close.
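To make that concrete, here is a minimal sketch of the kind of log-filtering automation I’m describing, written in Python. The filter patterns, file path, mail host and addresses are illustrative assumptions for the example, not our actual configuration.

    import re
    import smtplib
    from email.message import EmailMessage

    # Hypothetical filter criteria; each label maps to a regex applied per log line.
    ALERT_FILTERS = {
        "xss": re.compile(r"(<|%3C)\s*script\b", re.IGNORECASE),
        "sql_injection": re.compile(r"\bunion\b.+\bselect\b|'--", re.IGNORECASE),
    }

    SMTP_HOST = "mail.example.internal"   # assumption: internal SMTP relay
    ALERT_TO = "secops@example.com"       # assumption: analyst distribution list

    def send_alert(label: str, line: str) -> None:
        """Email the matching log line to the on-call analysts."""
        msg = EmailMessage()
        msg["Subject"] = f"[log-alert] {label} event detected"
        msg["From"] = "logwatch@example.com"
        msg["To"] = ALERT_TO
        msg.set_content(line)
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

    def scan(log_path: str) -> None:
        """Scan the centralized log file and alert on lines matching any filter."""
        with open(log_path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for label, pattern in ALERT_FILTERS.items():
                    if pattern.search(line):
                        send_alert(label, line.rstrip())
                        break

    if __name__ == "__main__":
        scan("/var/log/central/aggregated.log")  # hypothetical aggregated log

Run on a schedule (or against a tail of the aggregated feed), something this simple covers the “email me when it matters” cases; everything else waits for a human to review the filtered logs.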

One of the events that we have decided should generate an email alert is XSS activity. Now, on any given day, XSS attacks against our public-facing resources are a given. In fact, our public marketing websites and applications are regularly subjected to SQL injection attempts, Conficker, Shellshock and multitudes of other attacks, as well as standard port-scanning activity as hackers look for vulnerabilities that they might be able to exploit.

But this particular XSS attack was unusual and worrisome. The source of the event was a PC on our internal network, and the destination was a server on our development network responsible for source code management. There was a good chance that the PC was compromised and attempting to attack other resources on our internal network. Or a rogue employee might be trying to hack our source code. On the other hand, it could be a false positive — for example, a misconfigured application or script that merely looks like an attack. That possibility had to be researched first, so we took a closer look at the network traffic and saw the following string, which (to me) is indicative of an XSS attack: “…User=weblogic+<script>alert('xss')</script>&NQPassword=abc123456…”
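For illustration, here is the sort of pattern match that would flag that string. The regex below is an assumption for the example, not the actual signature our firewall uses, and the captured value is just the fragment quoted above.

    import re

    # Assumed detection pattern: an inline <script> tag, raw or URL-encoded,
    # appearing inside a request parameter value.
    XSS_PATTERN = re.compile(r"(<|%3C)\s*script\b", re.IGNORECASE)

    # Fragment of the request parameters quoted from the threat log above.
    captured = "User=weblogic+<script>alert('xss')</script>&NQPassword=abc123456"

    if XSS_PATTERN.search(captured):
        print("flag: possible cross-site scripting attempt")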

I traced the IP address to a PC running Mac OS that was assigned to an engineer in our India office. Normally, I would first launch a surreptitious investigation, monitoring traffic and conducting some background checks on the employee, but I felt it was urgent to address the issue as quickly as possible. So I contacted the engineer’s manager, who assured me the engineer was a stellar employee, a critical resource on the team and a person of high integrity and moral character. Trusting in that testimonial, I contacted the engineer via email to see what he could tell me about the suspicious traffic.

Fortunately, my trust wasn’t misplaced. His explanation made it clear that nothing as exciting as an internal attack was going on.

Some background: A few weeks ago, we began to deploy our yearly security-awareness training, which covers general awareness issues for all employees and specialized application security training for engineers involved in the development, testing or QA of our products. After taking the module on XSS, the engineer wanted to see if what he had learned would work on a development server on the internal network. And, bad luck, he chose the server hosting our source code repository. Bad judgment, perhaps, but at least the situation showed that our network monitoring is working as advertised.

In fact, I praised the engineer for wanting to test the security of applications that he is responsible for developing. After all, my philosophy is that security is everyone’s responsibility, and when you don’t have a huge security staff, security testing should be encouraged. But I also explained that he needs to coordinate any testing with the security department so that we can provide some oversight and will know to attribute any alerts to his announced activities.

In the end, everyone involved was relieved that the alert had arisen from a relatively benign event. I also got a good night’s sleep, because being alerted in the first place means we can feel fairly confident about our ability to detect real problems.

This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at mathias_thurman@yahoo.com.


