[ Monday, January 24, 2005 ]
I spent today and will spend tomorrow at an HCCA seminar on the Security Rule, presented by Paul Litwak and Robert Nied. The seminar is fairly heavy on the technical part, which is probably fitting since there's a very small presence by lawyers in the room. Most of the crowd is made up of techies from covered entities, looking for advice on what they need to be doing to flange some security onto their systems. I did take some random notes through the seminar, which I'll post here as I made them:
Crap. I would, but for some reason my computer won't let me copy blocks of text. I think it's SBC's fault. I'll have to try from my laptop on dialup.
There is wifi in the hotel where the seminar is, but not in the conference room (actually, it is in the conference room, but only if the whole conference pays $450 for it). You can wifi in the lobby for about $5 for the first 15 minutes and a buck a minute thereafter. You can connect for free in the hotel's business center, but that's gotta be between sets or during breaks, which won't be easy.
UPDATE: The good news: I'm back at the seminar Tuesday morning, connected with my laptop in the business center during the first break, and I can cut, copy and paste. The bad news is that I left my seminar book at home. It's sitting right by my home computer where I left it while unsuccessfully blogging last night. Anyway, here are my notes from yesterday:
Three concepts built into the security rule: comprehensive and coordinated approach; scalable; technology neutral
GASSP: generally accepted system security principles.
Security risk assessment tools:
Risk Analysis is an ongoing process. Re-do it often. Technology changes, and what is risky today may be safe tomorrow (and what’s safe today may be risky tomorrow). You also must do a periodic technical and non-technical evaluation of how well your security plan meets the Security Rule, so you should mesh your ongoing risk analysis with that required periodic evaluation.
Policies should not be too technical. They are about process, so they need to be written in a way that they are understandable across the enterprise. They should be approved at a high level, so there’s buy-in up and down the ladder. Try to cover everything you should, but don’t try to cover everything you can. They should be disseminated widely (on your internal web site?). They should mesh between your tech dept and your personnel dept, so you can fire someone for violating them. They should be enforceable, and enforced (so don’t overreach). Update them with some regularity. Allow exceptions, but be careful, and make them temporary, not permanent.
Don’t forget the difference between policies and procedures. Policies are high-level and general, but where you need specific steps outlining how the general becomes specific, that’s where your procedures come in. Policy is a public statement, but a procedure usually is proprietary and should only be disseminated to those who put it into action; you don’t want to broadcast how you are protecting the security of your PHI.
Google: the ultimate hacker tool.
Tell your employees that you are monitoring them. If you’re doing audits and logs, you will have to. Convince them that you’re going to see where they’ve been to make sure they’re not going where they shouldn’t. Fire the first person you see going where they shouldn’t, as an example to the rest. It matters less whether you actually watch everyone than whether people believe you can and will. At the very least, though, do some sampling, and do that on everything in your policies and procedures: you don’t have to look at every computer, but occasionally look at a random sample of them.
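The random-sampling idea is simple enough to sketch in a few lines of Python. Everything here is illustrative: the workstation inventory is made up, and in practice the list would come from your actual asset records.

```python
import random

def pick_audit_sample(workstations, sample_size, seed=None):
    """Pick a random subset of workstations to audit this cycle.

    Auditing a rotating random sample is far cheaper than checking
    every machine, and the mere possibility of being picked is itself
    a deterrent.
    """
    rng = random.Random(seed)  # seed only so a demo run is repeatable
    size = min(sample_size, len(workstations))
    return rng.sample(workstations, size)

# Hypothetical inventory of 100 machines.
inventory = [f"ws-{n:03d}" for n in range(1, 101)]
this_month = pick_audit_sample(inventory, sample_size=10, seed=42)
print(this_month)
```

Run it again next month without the fixed seed and you get a different ten machines, which is the point: nobody knows in advance whether their computer is on the list.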
Employee termination: make sure access rights are terminated before the butt of the employee-to-be-fired hits the chair in HR. When they’re hearing the words, “you’re being downsized/we’re going in a different direction/we’re eliminating your department/etc.”, your tech folks should be deleting their logon and password. You probably have an explicit procedure that you follow when you terminate someone (having security escort them back to their office to pack and then out to their car, etc.); add in a procedure to have computer access deleted.
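The "access off before the HR meeting ends" procedure could be scripted so it happens in one step and leaves an audit trail. This is only a sketch: the `disable_login` and `revoke_badge` methods are stand-ins for whatever your real identity system exposes, and the fake directory exists just so the example runs end to end.

```python
from datetime import datetime, timezone

def deprovision(user_id, directory, log):
    """Disable a departing employee's access and record each step.

    `directory` is any object exposing disable_login() and
    revoke_badge() -- hypothetical names for whatever your identity
    management system actually provides.
    """
    for action in ("disable_login", "revoke_badge"):
        getattr(directory, action)(user_id)
        log.append((datetime.now(timezone.utc).isoformat(), user_id, action))
    return log

# Minimal stand-in directory for demonstration purposes.
class FakeDirectory:
    def __init__(self):
        self.disabled, self.badges_revoked = set(), set()
    def disable_login(self, uid):
        self.disabled.add(uid)
    def revoke_badge(self, uid):
        self.badges_revoked.add(uid)

audit_log = []
d = FakeDirectory()
deprovision("jdoe", d, audit_log)
print(d.disabled)  # jdoe's logon is gone while HR is still talking
```

The timestamped log matters as much as the disabling itself: it documents that access ended before the employee left the building.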
Legal risks of allowing (or not preventing) an employee or other person with access to your systems to use your system to launch an attack on another company: if the other company can trace the attack back to you, and your employee launched it (or let the keystroke worm in through their unrestricted internet access), the other company might be able to sue yours for launching the attack. If a lawyer can figure out how to make money out of this, they will. Lawyers look at the world like a piñata; if one figures out the right place to whack it, where lots of candy falls out, lots of other lawyers will gather around and start whacking the same spot. Think of the lawsuits about asbestos.
Vendors: ask them to list all default passwords to the items you’ve bought from them, and block them. Ask them how many security patches they have issued (0 and 100 are bad answers, but a couple or a handful is OK). Ask them how they incorporate GASSP best practices into their systems, what their procedures are to implement security reviews and practices, etc.
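Once the vendor hands over its default-password list, blocking those passwords is a trivial check to automate. A minimal sketch, with a made-up default list standing in for whatever the vendor actually gives you:

```python
# Illustrative only: populate this set from the vendor's own list of
# factory defaults, not from a guess.
KNOWN_DEFAULTS = {"admin", "password", "letmein", "changeme", "12345"}

def is_vendor_default(password):
    """Return True if the password is a known factory default,
    ignoring case and surrounding whitespace."""
    return password.strip().lower() in KNOWN_DEFAULTS

print(is_vendor_default("ChangeMe"))  # True  -> must be reset
print(is_vendor_default("x9!vQ7pL"))  # False
```

A check like this belongs both at deployment (scan every new device for unchanged defaults) and in the password-change routine itself, so a default can never be set back.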
Contingency Plans: how quickly will your vendors get operations/applications back up? Will your data be back up then too?
Make sure you identify key systems and back up data there often. Test restoring from your backup sometime, so you’ll know you’ll be able to do it if you ever really need to. Keep your backup media in a separate location, offsite, so the flood that takes out your systems doesn’t also take out your backup. Document and set up a procedure for doing backup, where the person doing the backup acknowledges in writing that it was done. Anticipate what may happen. When you back up power, you may need to back up power for your a/c in the computer room as well.
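The "test restoring from your backup" advice only means something if you verify the restored data against the original, not just confirm that the restore job finished. A sketch of that verification using checksums; the temp files here merely stand in for a real backup and restore:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a file in chunks so large backup files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_matches(original_path, restored_path):
    """True only if the restored file is byte-for-byte identical
    to the original."""
    return sha256_of(original_path) == sha256_of(restored_path)

# Demonstration with temp files standing in for a real backup/restore.
with tempfile.TemporaryDirectory() as tmp:
    orig = os.path.join(tmp, "records.db")
    restored = os.path.join(tmp, "records_restored.db")
    with open(orig, "wb") as f:
        f.write(b"sample test data, no PHI")
    with open(restored, "wb") as f:
        f.write(b"sample test data, no PHI")
    print(restore_matches(orig, restored))  # True
```

Running a comparison like this on a schedule, and having the operator sign off on the result, is exactly the kind of documented backup procedure the seminar described.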
Have succession planning in place as well: if the disaster takes out not only your systems but your senior executives too, you need to know in advance who will lead the recovery effort. Line up backup vendors who can provide replacement hardware and software if yours is knocked out and your primary vendor can’t perform.
Keep a copy of the disaster plan at home; if it’s destroyed with the office, it won’t be much good.
Documentation best practices: keep copies of access requests and approvals (when someone wants approval to access your systems), and copies of any responses to problems. These aren’t legal requirements, but best practices.
Jeff [11:50 PM]