[ Friday, July 31, 2015 ]
Social Media and HIPAA:
Jeff [11:15 AM]
I haven't given a speech on medical use of social media in a couple of years, so I haven't been thinking about it, but it seems to keep coming up. Here's a decent article highlighting the risk that what you think is "de-identified" isn't.
[ Tuesday, July 28, 2015 ]
Bleg (blog-based beg):
Jeff [11:43 AM]
If you like the blog, go here and nominate it for the niche/specialty category.
[ Monday, July 27, 2015 ]
Georgia CCSP Breach:
Jeff [11:10 AM]
A state senior services organization suffered a data breach, apparently when an email was sent that included diagnosis data for about 3,000 people. Apparently no social security numbers or other ID-theft-type data were included in the breach.
[ Tuesday, July 21, 2015 ]
Cell Phones in the OR:
Jeff [1:12 PM]
I saw the headline for this article in the Atlantic, but when I read it I saw it wasn't focused on what I perceive to be the bigger problem. The Atlantic is looking at the "don't text and drive" aspects, while my usual concern with texting has to do with security and medical record issues, so I didn't link to it. But then I got my afternoon email from FierceHealthIT, and sure enough they highlighted the data privacy and security issues OR texting raises.
In my experience, if you call a surgeon's cell phone during normal "operating" hours, you're as likely as not to get someone (a scrub nurse or tech, usually) answering the phone with the phrase, "Dr. _______'s phone." And if everyone's dropped their phones in the same location and a phone goes off with a text, someone's going to pick up each phone to see who got the text. AND, unless you're using secure texting software, the nurse or tech is likely to read PHI that he or she shouldn't have access to. You can see the problem there -- so think before you text, especially in the OR.
So Many Breaches, But So Few Lawsuits:
Jeff [10:42 AM]
Wonder why? It's mainly because a plaintiff in a lawsuit generally must show how he/she has been damaged, and to be honest, most data breaches don't cause calculable damages. And courts tend to throw out cases where the damages are purely speculative, as in these Illinois cases. Sometimes identity theft occurs and you can prove it. Sometimes, as in the Walgreens case, actual pain and suffering can be proven. But if the breach merely causes the plaintiff to be at greater risk of identity theft and nothing more, that's going to be a hard case for a plaintiff to win.
[ Monday, July 20, 2015 ]
More on the UCLA data breach here.
Jeff [4:06 PM]
Using HIPAA as an Excuse:
Jeff [4:03 PM]
Interesting article in the NYTimes on misunderstanding HIPAA. Just wish they wouldn't spell it "Hipaa." It's an acronym; use all caps.
[ Friday, July 17, 2015 ]
Jeff [1:53 PM]
[ Thursday, July 16, 2015 ]
Good News, HIPAA's designed to do just that
Jeff [2:47 PM]
: John Halamka (Beth Israel Deaconess CIO) and Deven McGraw (current OCR Deputy Director for Health Information Privacy) have jointly penned a commentary at AHRQ
warning against overly-zealous PHI protection that prevents proper data transfers (to other providers and caregivers) or jeopardizes care (when data protection efforts prevent legitimate patient identification or cause mis-identification).
Covered Entities sometimes hide behind HIPAA and refuse to share data when it can be and should be shared. Sometimes there's an underlying commercial reason to resist data sharing; the current issue of EHR non-interoperability is a good example of that. Sometimes it's well-intentioned overzealousness. Most of those incidents involve someone misconstruing HIPAA's restrictions, and it's led some critics to say that HIPAA needs to "keep up with the times."
As John and Deven point out, though, "HIPAA's framework may need to flex and bend to meet the needs of a new health data ecosystem." The good news is that HIPAA's original and current framework do just that. What's reasonable and appropriate, what's the minimum necessary amount of PHI, what technology is safe and appropriate, and what safeguards are reasonable all change with the circumstances, including changing technological capabilities, risks, protections, and options.
HIPAA's technological neutrality, scalability, and reasonableness standards ensure that it's always up to date. Be safe, keep your data secure, and err on the side of protecting data, but don't harm your patients or hinder the delivery of healthcare to them. If you think you can't share data, double check that impulse, particularly if there might be an ulterior motive for refusing to share the data.
UPDATE: I brainfarted and conflated John Halamka with John Parmagiani. Fixed above; in the words of my former governor, Oops.
[ Monday, July 13, 2015 ]
St. Elizabeth (Brighton, MA) breach:
Jeff [1:59 PM]
Not having policies and procedures, not vetting internet-based document storage apps (e.g., Dropbox), and losing laptops and flash drives can cost you a quarter million dollars. At least that's what it cost St. Elizabeth Medical Center.
What's interesting to note in the settlement agreement is that it was not simply using Dropbox (or whatever app they were using) that resulted in the violation; rather, it was that they didn't do a risk analysis on whether they should use it. I suspect that if they had done a risk analysis and reasonably determined that using Dropbox was safe (maybe the data was mostly de-identified, maybe the Dropbox access was tightly controlled and audited, maybe some other safeguards made it palatable), OCR wouldn't have fined them, or at least not this much.
Failing to have done a risk analysis on using Dropbox might also indicate that SEMC didn't do other risk analyses; at any rate, not doing one on the Dropbox use eliminates their ability to claim that it was safe regardless.
I can't urge more strongly that you do a risk analysis, and redo it regularly (probably every year, unless you've got a really good reason to wait longer).
[ Thursday, July 09, 2015 ]
Jeff [3:32 PM]
If you follow sports you probably know that a couple of NFL players lost fingers due to fireworks accidents over the 4th of July weekend. But Adam Schefter, an ESPN reporter, just tweeted a screen shot of Jason Pierre-Paul's medical record showing that his right index finger was amputated. How did the ESPN reporter get the medical records?
NFL players have less medical record privacy than other folks due to their collective bargaining agreement and their individual contracts. In fact, part of the NFL's rules require teams to post "injury reports" every week during the season, which obviously contain medical information. The teams aren't HIPAA covered entities (nor, obviously, is ESPN), but team trainers may be (especially if they are doctors), and facilities where players are treated are. So while they might have to give up some privacy, that's limited. When Peyton Manning injured his neck and missed time playing for the Indianapolis Colts (which led to his release and move to Denver), he said, "I don't know what HIPAA stands for, but I believe in it and I practice it."
So, how did Adam Schefter get the records?
Breach Notification: Great article
Jeff [12:36 PM]
on when to report a data breach, and why over-reporting can be as bad as under-reporting. Be honest and legit in your breach risk analysis, but be fair to yourself as well. And be prepared: if you report something, you're likely to have to "open the kimono" to OCR. If your HIPAA activities have not been up to par, be ready for some harsh scrutiny.
Big takeaway: Do your risk analysis. Maybe it wouldn't have stopped the breach, but you can't prove that, so the excuse won't fly. When was the last time you did a formal risk analysis? Idaho State paid $400,000 because it hadn't done one in several years.
[ Wednesday, July 01, 2015 ]
Cybersecurity: The New Front Line in the HIPAA Security War?
Jeff [2:33 PM]
Some recent headlines have indicated that a majority of HIPAA breaches are now the result of "intentional" or "criminal" actions; that may be true, but the implication that the theft of the data is intentional isn't. In most cases involving theft, a phone, laptop, or other valuable asset is the true target of the "intentional" or "criminal" act, not the data on the device.
However, it is true that intentional attempts to steal data have dramatically increased, through cybersecurity incidents. Two-thirds of respondents to this HIMSS survey said their organizations were victims of some form of cybersecurity issue recently. Obviously, the respondent pool is primarily made up of large healthcare businesses and not small practices, so this could be over-represented; but it's also true that HIMSS members are much more likely to be focusing on, and defending against, cyber intrusions. Smaller operators, like smaller physician practices, aren't as attractive a target in terms of the amount of data that could be stolen, and are also less likely to be as interconnected as a large business. On the other hand, their defenses will be much lower.
A la Willie Sutton, cyber thieves will always target the big players because "that's where the data is." But small providers have just as much to worry about: cyber thieves would like a more "target-rich environment," but might also be attracted to the lack of safeguards and protections in the small provider community.
As always, now is a good time to take a look at what you're doing to find your vulnerabilities, fix your weaknesses, cover your risks, and prepare for bad incidents. When did you last do a risk analysis, and did you address cybersecurity specifically?
[ Wednesday, June 24, 2015 ]
Jeff [5:06 PM]
was just a bored employee snooping on about 5 random patients a day. Seems like no harm/no foul; but it would be really interesting to hear what the employee thought he/she was doing.
[ Saturday, June 20, 2015 ]
Medical Identity Theft:
Jeff [5:01 AM]
There's certainly been a lot of talk about medical identity theft (here and elsewhere) lately, but now we know that it's on the rise: according to the Ponemon Institute, these types of thefts are up 22% over the preceding year. Of course, medical ID thefts still make up only a small portion of overall data breach incidents, but it's still extremely troubling, given the potential for life-or-death consequences.
[ Wednesday, May 27, 2015 ]
Beacon Health (South Bend, Indiana):
Jeff [10:09 AM]
Another day, another hack. Today's unlucky victim is Beacon Health.
It looks like only emails were compromised, and so far there's no actual evidence of misuse. No indication of how it happened, but I'd suspect phishing.
[ Friday, May 22, 2015 ]
Next Phase of OCR Audits:
Jeff [2:11 PM]
Have you received a survey notice from OCR in the last few days? It appears that the long-awaited second phase of audits is finally rolling out. Nobody outside OCR knows for sure if these are all that are coming, or if everyone who gets a survey will get audited. So, if you got a survey, it doesn't necessarily mean you'll be audited (but it's more likely than if you didn't). And if you didn't get a survey it doesn't necessarily mean you're in the clear (but again it's more likely than if you got a survey).
[ Wednesday, May 20, 2015 ]
Latest Hack Victim: CareFirst BCBS.
Jeff [5:10 PM]
Doesn't sound so bad, but I've learned to wait for the rest of the shoes to drop.
[ Wednesday, May 13, 2015 ]
Indiana State Medical Association Breach:
Jeff [3:00 PM]
The IT chief of the ISMA, of all people, lost a laptop and two hard drives containing Social Security Numbers and medical information of about 40,000 beneficiaries of the Association's health insurance plan. Apparently they were (i) unencrypted and (ii) left in plain sight in his unlocked car. In addition, he failed to report the theft for 24 hours.
Possible upside: ISMA's insurance is through Anthem, so maybe all of the data had already been stolen when Anthem got hacked.
[ Tuesday, May 05, 2015 ]
Is this a HIPAA breach? A guy had a leg amputated
Jeff [5:20 PM]
, and the hospital threw the leg in the trash, with the patient's name written on it. The cops found it in the landfill and, as you might expect, checked up on the guy, thinking foul play might've been involved.
One could argue that, while the name written on the leg is definitely an identifier, the leg itself is not "information" and therefore this could not be protected health information. However, the presence of the leg (with the identifier) implies some information, even if it's not "information" itself.
I would suspect OCR would consider this to be PHI, based on past experience, but if you wanted to say it wasn't, I'd say you at least would have a leg to stand on.
Hat tip: Ron Holtsford
Baltimore Riots: Anyone know whether CVS suffered a data breach when their store was looted? An inquiring reader of the blog (from
Jeff [4:42 PM]
Montgomery, AL) raised the issue, and it's definitely interesting.
My assumption would be that the store operates on some sort of dumb terminal pharmacy information system for the drug records, so that there's no real data stored in any of their drugstores; it appears on the in-store computers while they are being used, but isn't stored there, so that when the computers are powered down and disconnected from the central network, they don't have PHI. Of course, there would be some PHI in the form of paperwork, particularly in the bags of filled-but-not-purchased prescriptions. There might be some other paper records as well. And CVS should have disaster recovery systems to determine whose filled prescriptions were potentially taken, but I'm not sure how well they'd be able to tell if any other paper records were compromised.
Could be an interesting exercise at CVS right about now. . . .
Jeff [4:28 PM]
this time it's Partners Healthcare in Boston, and 3,300 patients are affected. The good news is that the EHR itself wasn't compromised, just some PHI in some email accounts (presumably internal emails only . . . ).
[ Tuesday, April 28, 2015 ]
Jeff [9:20 AM]
[ Monday, April 27, 2015 ]
Another Pharmacy Trashing Patient Information
Jeff [10:24 PM]
: This time it's a small compounding pharmacy (Cornell Prescription Pharmacy in Denver) rather than a national chain, but unshredded paper records in the trash are the culprit. Importantly, the pharmacy did not have HIPAA policies and procedures in place. No known harm was done, but the fine was $125,000.
"Failure to implement any written policies and procedures" equals $125,000. Key word: any.
[ Monday, April 20, 2015 ]
Workplace Wellness Programs:
Jeff [12:57 PM]
Do you have one? Is it covered by HIPAA? Maybe, maybe not.
Big HIPAA fines are coming
Jeff [11:11 AM]
. . . . I keep hearing this, and I'm sure there will be some doozies.
[ Wednesday, April 15, 2015 ]
More on Medical Identity Theft
Jeff [11:55 PM]
: certainly getting a lot of attention.
UPDATE: I've got to say that's a little misleading. "Nearly 60% were the result of theft" makes it sound like there were 600 breaches affecting about 20 million people, where the data was stolen for nefarious purposes. But that ain't true in the least. Yes, that many people had their data impacted by a theft, but the theft was not a theft of the data -- it was a theft of a laptop or flash drive or cell phone or some other piece of technology that could be sold (the equipment, not the data) for a profit. In virtually all of those cases, the thief had zero interest in the data. Those are "crackhead" cases, and in virtually all of those, the thief deleted or destroyed the data at the first opportunity. Theft, yes; theft of the data, not really.
[ Friday, April 10, 2015 ]
Another Individual HIPAA Criminal prosecution:
Jeff [1:49 PM]
this time a hospital respiratory therapist who accessed patient data. No indication in the article what she was attempting to do with the data, but since it's a criminal complaint, I suspect either identity theft or personally-motivated snooping.
[ Tuesday, March 31, 2015 ]
Jeff [2:18 PM]
[ Monday, March 30, 2015 ]
Jeff [4:10 PM]
[ Thursday, March 26, 2015 ]
Medical Identity Theft:
Jeff [9:25 AM]
yet another story on how it's the fastest growing type of identity theft. Some good points about your strategy for preventing it; it should include:
- Encryption: at least consider it, but realize that sometimes it won't help -- for example, in the Anthem hack, the hackers got access to system administrator accounts, so they had the encryption key anyway.
- Data Loss Prevention: DLP encompasses several concepts, such as software to analyze data access and use, and systems to see when data is moved into or out of the system. It is always a good idea.
- Cyber insurance: check out prices and see if it's worthwhile to you. But make sure you know what you are buying: what is covered, what isn't, who pays first dollar, who picks the breach response and rehabilitation vendors, and where the coverage ends.
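On the encryption point above, here's a toy sketch (NOT real cryptography -- the XOR "cipher" and every name in it are purely illustrative) of why encryption at rest can be moot once an attacker holds valid credentials, as reportedly happened in the Anthem hack:

```python
# Toy illustration only: why encrypted-at-rest data doesn't help
# against an attacker with stolen, valid credentials.
import base64

def toy_encrypt(plaintext: str, key: bytes) -> bytes:
    # Repeating-key XOR stands in for real encryption in this sketch
    data = plaintext.encode()
    return base64.b64encode(bytes(b ^ key[i % len(key)] for i, b in enumerate(data)))

def toy_decrypt(token: bytes, key: bytes) -> str:
    data = base64.b64decode(token)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data)).decode()

class RecordStore:
    """Stores records encrypted at rest; decrypts for any authenticated user."""
    def __init__(self, key: bytes, credentials: dict):
        self._key = key
        self._credentials = credentials  # username -> password
        self._records = {}

    def put(self, record_id: str, phi: str):
        self._records[record_id] = toy_encrypt(phi, self._key)

    def get(self, record_id: str, user: str, password: str) -> str:
        if self._credentials.get(user) != password:
            raise PermissionError("bad credentials")
        # Anyone with valid credentials gets plaintext back --
        # the encryption layer is transparent to the application
        return toy_decrypt(self._records[record_id], self._key)

store = RecordStore(key=b"s3cret-key", credentials={"admin": "hunter2"})
store.put("pt-001", "DOB 1970-01-01; dx: hypertension")

# On disk or in a backup, the record is ciphertext:
print(store._records["pt-001"])
# But an attacker who phished the admin password reads it in the clear:
print(store.get("pt-001", "admin", "hunter2"))
```

In other words, a thief who grabs the laptop gets only ciphertext, but a thief who grabs a credential walks through the front door -- encryption or not.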
[ Tuesday, March 24, 2015 ]
A "Security Culture":
Jeff [4:25 PM]
does your hospital have one? Here are four traits common to hospitals with a security culture, at least according to Sue Schade of FierceHealthIT. Although I think the middle two have a whiff of rent-seeking, I can't argue with 1 and 4.
[ Monday, March 23, 2015 ]
2015: Year of the Healthcare Data Breach.
Jeff [10:32 AM]
It's sure in the news a lot right now.
[ Thursday, March 19, 2015 ]
What To Do If You're Hacked: These should be self-evident
Jeff [2:21 PM]
, but might not be. They are all elements in a decent "Breach Incident Response Plan." Do you have a BIRP? You should get one; email me and I'll give you some help.
Target: I hear (h/t Lynn Block) that Target has settled the class action for $10,000,000, offering up to $10,000 to any individual who can prove damages caused by the breach. That's certainly a lot cheaper than I would have expected, and I suspect it's well below the "brand damage" losses Target suffered already.
Jeff [1:44 PM]
[ Tuesday, March 17, 2015 ]
Jeff [6:07 PM]
Medical Identity Theft in the News: Here's a story
Jeff [4:30 PM]
on how healthcare data is a treasure trove for data thieves. But here's a much better one (they got some really smart guy to talk about it). And I was also interviewed by Scott Crowder of KTRH Radio in Houston (AM 740) this morning, for inclusion on the radio sometime tomorrow.
[ Wednesday, March 11, 2015 ]
Jeff [2:57 PM]
Jeff [2:39 PM]
[ Wednesday, March 04, 2015 ]
Jeff [3:47 PM]
[ Wednesday, February 25, 2015 ]
Hippler (East Texas Hospital Data Thief) Gets 18 Months:
Jeff [8:08 AM]
Like Gibson years ago, a healthcare worker who stole PHI to use for fraud gets 18 months in jail. Two interesting points: first, the US Attorney's office isn't saying what hospital he worked at, and in fact the case has been weirdly under the radar. Second, is there a better name for a HIPAA violator than Hippler?
[ Wednesday, February 18, 2015 ]
Health Data Identity Theft:
Jeff [10:55 PM]
Interesting article from NPR on the black market for stolen health data.
[ Friday, February 13, 2015 ]
Jeff [2:24 PM]
Looking for de-identification tools? Here's a good place to start, with some proprietary and open source options. Nice to see my hometown UT-Dallas in the mix here.
A big hat tip to Daniel Barth-Jones (@dbarthjones) for this link.
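For the curious, here's a minimal sketch of the Safe Harbor-style field suppression such tools automate. It is illustrative only, not a compliant implementation: the field names are my own, and the actual Safe Harbor standard covers 18 identifier categories with extra conditions (for instance, ZIP truncation depends on the population of the area):

```python
# Illustrative Safe Harbor-style suppression; field names are hypothetical.
# Direct identifiers to strip outright (a small subset of the 18 categories)
DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "email", "phone", "address"}

def deidentify(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # suppress direct identifiers entirely
        if field == "zip":
            # Keep only the first 3 digits of the ZIP code
            out[field] = str(value)[:3] + "00"
        elif field == "age":
            # Ages 90 and over collapse into a single "90+" bucket
            out[field] = "90+" if int(value) >= 90 else value
        else:
            out[field] = value
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "zip": "75201",
           "age": 93, "diagnosis": "hypertension"}
print(deidentify(patient))
# Note: even after suppression, quasi-identifiers (ZIP prefix, age bucket,
# diagnosis) can sometimes be linked back to an individual -- which is
# exactly the residual re-identification risk these tools try to measure.
```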
One More Anthem Thought: What if the hackers were really looking for a needle, not the haystack? What if they weren't after tens of millions of medical identities to conduct identity theft or something else, but were really looking for specific information on a handful of specific individuals, and only accessed the huge amount of data to cover their tracks? If it really was a hack by Chinese nationals operating under the guise of the Chinese government, wouldn't that make more sense?
Jeff [12:27 PM]
I'm not saying I believe it was the Chinese government, any more than I believe the Sony hack was the North Korean government. Or, if it actually was, I'm not buying that the Norks were so torqued about a Seth Rogen film that they'd waste their resources hacking Sony. I'm betting if it was them, they were looking for something else, perhaps something they could use to extort someone.
I'm not paranoid. Really.
More Anthem: Lessons for IT Leaders.
Jeff [12:21 PM]
Encryption decisions (really, any protective decisions) can have much greater consequences tomorrow than you realize today. And it is very important that you know the value of your information: not just the value to you, but the value to a hacker. You may think the information you hold is mundane, but what you think doesn't really matter. What matters is what the predator thinks.
[ Thursday, February 12, 2015 ]
More on the Anthem Hack, and "What It Means":
Jeff [10:50 AM]
I've posted several posts on the Anthem hack, and I'm not the only one. AHLA sent out an email to its HIT and payers, plans and managed care practice groups explaining the hack and the class-action lawsuits already filed. More news is out today, from experts who think 2015 will be "the year of the healthcare hack." Maybe, maybe not, but the news does bring a few additional issues to mind:
First, as the AHLA email points out, the initial lawsuits and some of the initial reporting point to the lack of encryption as a big factor. Some have indicated that the Anthem hack may cause HHS to harden the encryption requirement of the Security Rule (as you know, encryption is not a required element, only an addressable one, and HIPAA covered entities are free to forego encryption if they reasonably determine it's not right for them). However, the hackers apparently got user credentials; even if the data had been encrypted, the hackers could have used the credentials to decrypt the data. The fact that encryption would've been irrelevant probably won't stop those claiming encryption should become required, but it's worth considering.
Secondly, some of the reporting is highlighting the "monetization" issue, which I've always seen as the issue. The hackers probably don't want the data because they're going to use the data; they want it so they can sell it to someone else who will use it for identity theft. If that's the case, there is a multi-tier market, which could be good or bad: as the data changes hands, it's harder and harder to catch the initial culprit; on the other hand, if there are several steps between the point of theft and the point of use, there are several opportunities to put systems or safeguards in place to catch the actors and/or prevent the improper use. In other words, you might not be able to stop the thief, but if you can stop the purchaser from using the stolen data, the criminal enterprise falls apart. Something to consider.
Another issue I hadn't thought about previously: not only can the stolen medical identity be used to obtain needed healthcare services (an impostor uses the stolen identity to directly receive needed healthcare services), the stolen identity could also be used to obtain unnecessary services. I can think of two examples: a stolen identity could be used to obtain Oxycontin or other prescription drugs that could then be resold, or could be used to bill for services that are not actually provided. In both cases healthcare providers would be required to be part of the scam, either unwillingly (a convincing doctor-shopping patient gets painkiller prescriptions) or willingly (a doctor bills for services not provided), but that's not inconceivable. My previous thoughts focused on the receipt of actual, needed services, in which case the value proposition is harder to see (you need an ultimate purchaser of the stolen identity who currently needs healthcare services); however, that's not the case, since you could get prescription drugs to sell on the black market. I hadn't considered that.
Finally, I had recently heard that while social security or credit card numbers don't bring much more than a couple of dollars each on the black market anymore, a stolen medical identity might be worth $50. In today's news from Reuters, it seems that a stolen medical identity is now worth only $20. These aren't hard and fast numbers, but still, that's a pretty big devaluation. Maybe the supply of medical identities (and concomitantly, the amount of hacking) is growing so fast the price is dropping; maybe hacker buyers are determining that medical identities aren't all that valuable; or maybe there's really not that big a market of buyers out there after all. I have no idea, but it's worth considering.
[ Wednesday, February 11, 2015 ]
Jeff [6:40 PM]
[ Tuesday, February 10, 2015 ]
More Anthem Fallout: Is Healthcare Particularly Vulnerable to Hacking?
Jeff [11:12 AM]
There are a lot of people saying that; most of them stand to profit if you believe them (including me, in fact). The Anthem breach gives an opportunity for a bunch of news articles on just this point. Let's consider this for a moment.
Much hacking and phishing is aimed at access to quick-value money: credit card numbers that can be used right away (with the victim perhaps not knowing about the use until the bill comes, or perhaps not even noticing it when the bill comes), actual bank account or financial account data so current funds can be withdrawn, phony checks written, etc. In this type of hacking, the reward comes quickly to the hacker, but might be small change and is usually not a long-term proposition.
Some hacking is designed to allow for real identity theft: the hacker acquires a social security number and other information, impersonates the individual to obtain credit cards, car loans, even house loans, runs up big debts, and when the credit card company or bank tries to collect, the impostor is gone with the loot and the victim is left to try to prove that it wasn't him that got/used the credit card, loan, etc. The reward takes longer, but can be much bigger than snatching a credit card number.
With regard to both of these types of hacks, the victim, the bank or credit card company, and the vendor at which the stolen credit card is used are all incentivized to prevent the hack, since all of them stand to suffer substantial harm: the victim's credit might be ruined (or he might pay for something he didn't get), and the bank, the credit card company, or the later vendor might be left with the bill.
Health records sometimes contain credit card numbers, but often don't, making them not particularly useful for the first type of hack. On the other hand, health records usually contain social security numbers and other demographic data that can be useful for the second type of hack. Thus, medical records might be useful for traditional identity theft schemes.
The much bigger risk, and what medical records are particularly well suited for, is medical identity theft. This type of hack targets patients with good insurance, and allows someone to impersonate the insured and receive the insured's health benefits. The impostor gets free or reduced cost healthcare, but unlike most other hacks, the "victim" (the person whose data was stolen) doesn't necessarily suffer (or at least doesn't suffer immediately); in fact, the victim might benefit, since the impostor might actually pay a part of the victim's annual deductible. Additionally, the person whose data was stolen is not in a very good position to know it was stolen, unless he regularly checks his EOBs (frankly, even if he scrupulously checks his EOBs, they can be hard enough to understand that the medical identity theft might not even be noticed). Rather, the immediate victim is the insurer, who pays for care for someone who did not buy insurance. And if the insurer discovers the identity theft, the care provider becomes the victim, since the insurer may try to recover the funds paid to the provider for the impostor's care.
Unlike a stolen credit card number, which can be used to purchase almost anything (including cash cards), a stolen medical identity is not as easy to immediately monetize. However, the lower level of vigilance by the potential victim makes medical identity theft easier to pull off.
More importantly, however, the risks of medical identity theft far outweigh the risk of credit card theft or regular identity theft. An impostor who receives care while posing as the insured will leave behind a medical record that might be relied upon by some future healthcare provider. Perhaps the impostor is not allergic to penicillin, but the insured is; the impostor receives care at a hospital and the medical record says the patient may have penicillin. When the real insured shows up, tragedy might occur. Thus, while regular identity theft might cause financial ruin to its victims, medical identity theft can kill.
Does the Anthem hack indicate that an epidemic of medical identity theft is on its way? Most criminals are looking for quick cash, and medical identity theft doesn't offer as quick a reward as access to a bank account or credit card number. However, given that there is profit to be made in medical identity theft, and the risks are much greater, healthcare providers, insurers, and patients should all be on high alert for signs of it, and be prepared to quickly respond.
Anthem Breach: Secondary Impacts on Employers.
Jeff [10:06 AM]
One thing to think about when you hear of big insurers being subject to a data breach: in many cases, while the company usually does have a great many insured beneficiaries (either through direct insurance purchases or fully-insured employers), almost all have a great many more beneficiaries covered as TPAs or otherwise. For example, most Americans with private insurance are insured by employers who have self-funded insurance plans. Those self-funded insurance plans then go and hire Anthem, United Healthcare, Blue Cross Blue Shield, Cigna, Aetna, or some other entity to administer those plans, and those third-party administrators (or TPAs) are usually insurance companies themselves; that makes sense, since they must know how to administer the employer's self-funded plan if they can administer their own insurance products.
So, when an insurer like Anthem suffers a breach, many of the impacted individuals will be direct Anthem subscribers, but more will likely be beneficiaries of some employer who hired Anthem as a TPA of its self-insured plan.
Thus, in addition to pondering Anthem's fate, and what Anthem ought to do, it makes sense to also ponder what those self-insured plans and plan sponsors ought to do. Interestingly, here's an employment law boutique with a blog post on just that. Something for employer clients of Anthem to consider, for sure, and useful thoughts for all employers with either fully-insured or self-insured/TPA plans. Additionally, it's worth it for employers to start thinking about what they would do if such a breach occurred with their own TPA.
Update: Here's another (shorter) blog post with an additional good point: check your BAAs to see who is responsible for notifications. Of course, if you are (i) a HIPAA covered entity or (ii) a HIPAA business associate with any possible breach notification obligations, you should already have breach notification communication tools (set channels of communication, form letters, vendors chosen if not actually lined up, etc.) in place, ready to pick up and use.
[ Thursday, February 05, 2015 ]
Jeff [2:22 PM]
HIPAA for Paralegals Webinar: if you're a paralegal interested in how HIPAA works, why providers hesitate to give you medical records you've requested for litigation purposes, or how to get those covered entities to give you those records, you might want to check out this webinar I'm putting on next week. You can get a 50% discount if you use priority code 15999 and discount code O7839374.