[ Wednesday, October 23, 2019 ]
Jeff [1:45 PM]
[ Wednesday, October 02, 2019 ]
Ransomware Attack on Alabama Hospital: DCH Health System facilities
Jeff [8:32 AM]
in Tuscaloosa, Northport and Fayette all diverting new patients while EMR and other systems were down.
[ Monday, September 23, 2019 ]
Jeff [4:34 PM]
The following is a guest post by Liam
Johnson, who is Editor-in-Chief of the website ComplianceHome.com
. Feel free to comment*, or discuss among yourselves.
*Comments are moderated and may not appear instantly.
HIPAA Compliant in Google Cloud Platform
Is Google’s Cloud Platform HIPAA compliant? And is Google’s Cloud Platform a viable alternative to AWS and Azure for healthcare organizations? In this post, we are going to determine whether Google’s Cloud Platform is HIPAA compliant, and whether healthcare organizations can use it to host infrastructure, build applications and store files that contain protected health information.
Presently, the use of cloud platforms by healthcare organizations has increased tremendously: the healthcare cloud computing market was estimated to be worth $4.65 billion in 2016, a figure expected to increase substantially by 2022.
Will Google Sign a Business Associate Agreement that
covers its Cloud Platform?
The Omnibus Rule came into effect in September 2013, and since then Google has been signing Business Associate Agreements (BAAs) with HIPAA covered entities for G-Suite. Google subsequently expanded its BAA to include the Google Cloud Platform.
The BAA covers the majority of Google's cloud services, such as Cloud Storage, Compute
Engine, Cloud SQL for PostgreSQL, Cloud SQL for MySQL, Container Registry,
Kubernetes Engine, BigQuery, Cloud Dataproc, Cloud Translation API, Cloud
Pub/Sub, Cloud Bigtable, Cloud Dataflow, Stackdriver Logging, Cloud Speech API,
Genomics, Cloud Machine Learning Engine, Cloud Datalab, Stackdriver Debugger,
Stackdriver Trace, Stackdriver Error Reporting, Cloud Data Loss Prevention API,
Cloud Natural Language, Cloud Load Balancing, Google App Engine, Cloud Vision
API, Cloud Spanner and Cloud VPN.
In 2016, Google
partnered with the backend mobile service provider Kinvey, subsequently leading
to the availability of mBaaS on Google Cloud. Connectors to electronic health
record systems that support healthcare apps are integrated into mBaaS.
Is the Google Cloud Platform HIPAA Compliant?
Since Google will
sign a BAA with all HIPAA covered entities, does this mean that its Google Cloud
Platform is HIPAA compliant?
HIPAA has one overarching requirement here, and that is the BAA. Google's willingness to sign one means that its data protection and security mechanisms have been assessed and deemed to meet or exceed the minimum requirements of the HIPAA Security Rule. Additionally, it means the cloud services Google offers meet the Privacy Rule requirements, and that Google understands its responsibilities as a business associate under HIPAA. Thus, it agrees to offer secure, HIPAA-compliant infrastructure for the processing and storage of Protected Health Information (PHI).
Nevertheless, it remains the responsibility of healthcare organizations to ensure that all the HIPAA rules are followed when using the Google Cloud Platform. Likewise, they should ensure their cloud-based applications and infrastructure are properly configured. Covered entities must disable any Google services the business associate agreement does not cover, control the setup to avoid accidental deletion of data, ensure access controls are implemented carefully, check audit logs regularly, and set all audit log export destinations. Moreover, care must be taken when uploading any PHI to the cloud to ensure it is adequately secured, and that the PHI is not shared with unauthorized persons.
Jeff [4:19 PM]
A friend at Stroz Friedberg (a part of Aon) let me know a few months ago that they are seeing a particular uptick in ransomware affecting law practices, but really it's a problem across all industries. As noted in the news out of Wyoming earlier today, ransomware is a particularly big problem in the healthcare industry.
I thought I'd post what my friend sent, with her permission. But I'd also like to point out the "big 4" words of advice to prevent ransomware or minimize its impact.
1. Patch software regularly. Most malware exploits a vulnerability that has been reported and for which a patch is available. You can't always patch immediately, but do so on a regular basis.
2. Practice good backup management. Having a perfect backup is the golden ticket for defeating ransomware: simply remove the encrypted content and replace it with the backup. Modern ransomware variants typically seek out and encrypt backup files as well as data files, so if your backup files are accessible on the same system, you could lose them too. Multiple serial backup versions, stored offsite, will speed recovery and save you the ransom payment.
3. Map your systems and remove unnecessary connectivity. It's better if an isolated portion of your computing environment is encrypted and not the whole thing. And you need to be able to find how the incident started to clean it up effectively.
4. Train and test your staff to recognize phishing attempts. The phishing attempt that isn't opened is the ransomware event you don't have and don't have to fix.
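On point 2, keeping multiple serial backup versions can be automated. Here's a minimal Python sketch (the dated snapshot directory layout and the retention count are my assumptions, not any particular backup vendor's tooling) that prunes snapshot directories down to the newest few:

```python
from pathlib import Path
import shutil

def prune_snapshots(backup_root: str, keep: int = 7) -> list:
    """Keep only the `keep` most recent snapshot directories.

    Assumes snapshots are directories named YYYY-MM-DD under backup_root,
    so lexicographic order equals chronological order.
    """
    root = Path(backup_root)
    snapshots = sorted(p for p in root.iterdir() if p.is_dir())
    stale = snapshots[:-keep] if len(snapshots) > keep else []
    for snap in stale:
        shutil.rmtree(snap)           # delete snapshots older than the newest `keep`
    return [s.name for s in stale]    # report what was pruned
```

The offsite/offline copies are the part that actually defeats ransomware; pruning just keeps the serial versions manageable.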
Anyway, here's the Aon report:
Over the past two weeks we have seen a significant uptick in ransomware attacks across
all industries involving the Ryuk ransomware. The initial foothold is typically
flagged as Emotet malware, and is usually delivered through a phishing email. The
Emotet attacker then sells its deployment/footholds to a group using the
Trickbot banking trojan. The "trick" refers to the various modules
the malware can dynamically load to augment its abilities. It uses common
vulnerabilities, such as EternalBlue, to spread rapidly throughout the victim’s
environment. The Trickbot group then sells its wide access to a ransomware
group, currently Ryuk (we have also observed Trickbot working with Bitpaymer).
Once the Ryuk group gains access, they interactively move through the environment,
spreading ransomware to encrypt files. They typically also go after backups in
order to block recovery efforts, forcing the victim to pay the often sizeable ransom
in order to restore mission-critical files and systems.
Mitigating Business Interruption
Organizations should pay close attention to any anti-virus alerts from their endpoints, with particular sensitivity to alerts for Emotet/Trickbot, since Ryuk or a similar ransomware typically follows fast on these.
In order to minimize the business impact of a ransomware infection, we
recommend the following preventative measures:
- Train employees to be aware of suspicious emails.
- Secure email platform account access - MFA, continual log review, etc.
- Deploy malware detection capabilities within mail gateways.
- Restrict the users' ability to enable document macros.
- Ensure AV is deployed to every machine and all alerts are being collected.
- Act on AV alerts.
- Ensure that network logs are being aggregated and reviewed for suspicious connections; Trickbot downloads its payload as a ".png" file.
- Restrict access and closely monitor admin and domain admin account usage.
- Do not use shared local admin accounts and passwords across machines -- this is an easy way for Trickbot to spread.
- Maintain a robust backup process for business critical servers and files such that back-ups occur regularly, are tested for efficacy, and are stored offline.
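The network-log recommendation can be partially automated. As a toy illustration (the space-delimited log format, field order, and size threshold are my assumptions, not Aon's; real Trickbot hunting needs proper IDS/EDR tooling), a scan for oversized ".png" downloads might look like this:

```python
def flag_suspicious_png(log_lines, min_bytes=200_000):
    """Flag requests for .png URLs with unusually large responses.

    Assumes a simple space-delimited proxy log: <client> <url> <bytes>.
    Trickbot has been observed fetching its payload disguised as a .png,
    which would typically be far larger than a real icon or logo.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue                    # skip malformed lines
        client, url, size = parts
        if url.lower().endswith(".png") and int(size) >= min_bytes:
            hits.append((client, url, int(size)))
    return hits
```

Anything flagged is a candidate for human review, not a verdict; large legitimate images exist.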
Getting Back to Business: Response and Recovery
- DO NOT power down or reimage infected systems.
- DO disconnect them from the network.
- Preserve infected machines and logs, and contact an IR provider.
- Ensure the AV solution does not delete the accompanying "ransom notes" (usually .txt or .hta files), as these are typically used to store a unique code that is necessary to decrypt the files if payment is made.
- Be on the lookout for other malicious software and persistence mechanisms, as the Ryuk group may install their own malicious backdoors into the environment as their approach evolves.
- Make a copy of online backups and store it offline. Alternatively, segregate online backups to prevent them from becoming encrypted or deleted by the attacker.
- Do not discuss the ability or appetite to pay the ransom via email.
Jeff [3:08 PM]
Campbell County Health has been hit by a ransomware attack
, resulting in diversion from the hospital's ER and the cancellation of services.
[ Friday, September 20, 2019 ]
Privacy Rights of Minors.
Jeff [8:22 AM]
Interested in knowing a little more about the privacy rights of minors? Here's a webinar.
I might even be able to get you a discount. . . .
[ Wednesday, August 28, 2019 ]
A concerted effort:
Jeff [5:46 PM]
Minnesota healthcare providers band together
for cybersecurity protection.
Jeff [2:18 PM]
apparently, some improper person got access to a Mass General research database
containing information on 10,000 patients. No SSN or other ID Theft treasure trove, but still a reportable HIPAA breach.
Presbyterian Healthcare (New Mexico) Data Breach:
Jeff [2:14 PM]
Looks like another email phishing attack
, where an employee clicked a link on a phishing email and let an intruder into the hospital systems' data. It does not look like EMRs were accessed, just email. However, there can be a lot of PHI in an email system.
[ Tuesday, August 27, 2019 ]
NY Dept of Health Issues Breach Notification Rules:
Jeff [11:23 AM]
The Department has issued a letter
to all licensed hospitals and other facilities outlining a new protocol that requires the facilities to notify the Department, along with other required notifications, of a potential cybersecurity incident. So, in addition to OCR reporting (soon after the incident if it involves 500 or more persons, after year end for smaller breaches), reporting to affected individuals, and possibly reporting to credit reporting agencies and attorneys general, add a new recipient of the notice.
Hmm. OK, the Department argues that notifying them helps them spread the word and provide assistance to the victimized organization; that makes sense.
However, notification is required for cybersecurity incidents. The notice says, "A cybersecurity incident is the attempted or successful
unauthorized access, use, disclosure, modification, or
destruction of data or interference with an information
system operations." Attempted? That's problematic. Must every port scan and firewall ping, which are "attempted" access to an information system, be reported? That looks like the Security Incident definition in HIPAA, which is equally overbroad.
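As an aside, the OCR reporting timing mentioned above (soon after discovery for breaches involving 500 or more persons, after year end for smaller ones) can be sketched as a tiny helper. This is an illustrative simplification of the Breach Notification Rule, not legal advice:

```python
from datetime import date, timedelta

def ocr_report_deadline(discovery: date, affected: int) -> date:
    """Rough OCR (HHS) reporting deadline under the Breach Notification Rule.

    Breaches affecting 500+ individuals: report without unreasonable delay,
    and no later than 60 days after discovery. Smaller breaches: report
    within 60 days after the end of the calendar year of discovery.
    """
    if affected >= 500:
        return discovery + timedelta(days=60)
    year_end = date(discovery.year, 12, 31)
    return year_end + timedelta(days=60)
```

And that is just the OCR notice; individual notice, possible media and credit-agency notices, state AG notices, and now the NY Department notice run on their own clocks.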
[ Friday, August 23, 2019 ]
Part 2 Rule Changes Are Coming.
Jeff [12:01 PM]
HHS issued a press release
yesterday, along with a fact sheet
, outlining some textual changes to 42 CFR Part 2. The proposed rule was issued yesterday, but has not yet been published in the Federal Register; the rule will be open for comment for 60 days. I was not able to locate a copy of the actual text of the rule; guess I'll have to wait for the Federal Register publication.
The Part 2 rules serve as a sort of "super-HIPAA" for federally-assisted substance abuse treatment centers. Unlike HIPAA, which allows a HIPAA covered entity to disclose patient information for treatment, payment, or healthcare operations without the consent of the patient, Part 2 requires the patient to specifically consent to the specific disclosure. The rules currently indicate that the consent must identify the exact recipient; stating that the recipients will be "healthcare providers involved in the patient's care" is insufficient; the Part 2 provider must say "Dr. Jones" or "Dr. Smith." The new rules will loosen these requirements somewhat.
Part 2 also requires that any disclosure contain an instruction to the recipient that the information remains subject to Part 2 and cannot be further disclosed. HIPAA, on the other hand, operates under a "horse is out of the barn" structure, where once a disclosure is made to a non-HIPAA-covered entity, it may be further disclosed. This concept in Part 2 isn't really changing, except that it's now clear that if a non-Part 2 provider develops medical records relating to a Part 2 patient that include information from the Part 2 provider, as long as the non-Part 2 provider keeps the records separate
, the Part 2 records don't subject all of the non-Part 2 provider's records to Part 2 restrictions.
The new rules also expand the "emergency" exception to disclosure of Part 2 medical records. As originally drafted, it was a medical emergency involving the patient that triggered the exception; now, if there's a declared emergency (like a hurricane), disclosure restrictions are loosened. Other changes involve the ability of providers to disclose information to state prescription drug monitoring plans and looser rules for disclosures for research.
The AP article has a pretty good explanation
of the impetus for the revisions, including an explanation of why Part 2 exists in the first place. These changes are really nibbling at the edges and fixing specific issues.
[ Monday, August 12, 2019 ]
Jeff [1:06 PM]
News this morning out of Pennsylvania: a patient has sued Lehigh Valley Health Network (LVHN, which is not LVMH, the luxury brand aggregator), alleging that a doctor on the staff who was not treating him, but with whom he had a business dispute, improperly accessed his medical records. He's suing the hospital for failing to prevent the doctor from accessing his records.
This raises a number of issues and possible teaching points.
Access Restriction is Required.
Hospitals do have an obligation to restrict access to PHI to only those persons with a need to access it. Sometimes this is easy -- an orderly or a maintenance worker shouldn't have access to PHI. But sometimes it's tricky; a nurse should only have access to PHI of patients he/she sees and treats, but if the hospital prohibits access to patients' PHI other than those assigned to the nurse, and there's an emergency in another department and the nurse must fill in there, the nurse might not be able to access necessary PHI and the patient's health might suffer. Likewise, doctors on staff should only access the PHI of their patients, but sometimes an emergency consult might be necessary. A pediatrician would probably never provide care to a geriatric patient, but in many cases lines aren't easy to draw.
Thus, providers must consider whether they can restrict access up front via hard-wired solutions like permitting access only to a set list of patients (or classes of patients). Oftentimes they can't, so they then need to set up some other sort of solution. Usually, this involves a two-part solution: first, the parties seeking access (workforce members like nurses and schedulers, as well as non-employees such as staff physicians at a hospital) must be instructed and trained to only access the PHI of their own patients and never access PHI for which they don't have a permitted need (usually treatment, but possibly payment for accounts receivable or finance employees, and healthcare operations for QA/UR staff). Second, the hospital or clinic needs some mechanism to make sure people are doing what they are supposed to be doing, and not improperly accessing PHI. This may involve random checks, regular checks, or the use of artificial intelligence or machine learning algorithms to identify potentially problematic access. The hospital or clinic should then follow up with those whose access seems excessive, and determine if there is a legitimate need. If not, they need to take follow-up actions with the access abusers -- more training, restricted access, or some sanction, up to and including termination for abusive snoopers.
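As a toy illustration of the audit-and-review mechanism, here's a hypothetical Python sketch that compares audit-log access events against each user's assigned patient list and flags unassigned accesses for follow-up (the data model is invented for illustration; real systems use far richer signals and ML scoring):

```python
def flag_unassigned_access(audit_log, assignments):
    """Return accesses to patients not assigned to the accessing user.

    audit_log:   iterable of (user_id, patient_id) access events
    assignments: dict mapping user_id -> set of assigned patient_ids
    Flagged events are candidates for human review, not proof of snooping
    (e.g., an emergency fill-in may be a legitimate, permitted access).
    """
    flagged = []
    for user, patient in audit_log:
        if patient not in assignments.get(user, set()):
            flagged.append((user, patient))
    return flagged
```

The human follow-up step is what turns a flag into either a documented legitimate need or a sanctionable incident.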
In this case, the hospital may have been doing the right thing; many hospitals need to allow open access to all physician staff members, and if the hospital had proper training up front and post-access audit controls, it's not impossible that this improper access might have slipped through the cracks. On the other hand, if the hospital did not train its employees, did not have policies in place regarding access by staff physicians, and did not reasonably audit to look for abusers and fix improper access problems, it may have violated the HIPAA Privacy Rule's requirement to restrict access. If the access was to an electronic medical record, the hospital might also have violated the HIPAA Security Rule.
Improper Access May Be a Breach.
Once the hospital knew that the access was improper, it then knew there was a "breach of unsecured PHI," and had an obligation to notify the patient. If it did not do so without unreasonable delay (and in all cases within 60 days of knowing of the breach), it violated the HIPAA Breach Notification Rule.
The doctor accused of improper access might also be liable here. He apparently claims that he had a patient-provider relationship with the patient, in which case his access to the PHI might have been proper. Even if he had a patient-provider relationship, that does not give him carte blanche to access the patient's PHI -- the access must still be for a permitted purpose such as treatment or payment (and if it's for payment, it must be limited to the reasonably necessary amount).
Don't Disclose PHI to the Press, Even if it is Already Disclosed.
I'd also note that both the hospital and the physician have (appropriately) not commented to the press on the matter; the statements attributed to them (and acknowledging the patient was a patient is, in itself, a disclosure of PHI) were taken from court filings. Generally, disclosing PHI in a court record, where the disclosure is relevant to the litigation, is a permitted disclosure, and it appears that the reporter pieced the case together from the court records. The fact that the PHI is already out in the public record is irrelevant -- just ask Memorial Hermann in Houston.
Even De-identified Information Can Sometimes Be Used to Identify Someone.
It's not central to this particular story, but another interesting point here is that this case shows how de-identifying information is sometimes ineffective, if there are other sources of information that might be leveraged to cross-check and add in identifiers. The Health Department didn't say who the patient was, but included the date of discharge, which the reporter was able to connect to the court filings. It's not absolutely certain that the specific patient mentioned in the Health Department report is the plaintiff patient in the lawsuit, but it's pretty likely.
In Litigation, a QPO is Always an Option.
Often, when PHI is used in litigation, the individual who is the subject of the PHI will seek to prevent his/her PHI from being in a public record, in order to keep his personal medical issues private. This can be done with a Qualified Protective Order or QPO, as specifically mentioned in the HIPAA regulations relating to information disclosed subject to a subpoena. Here, the information in the legal proceeding actually ended up being used by the press to the detriment of the hospital and physician. I'm guessing that LVHN, and possibly Dr. Chung, are wishing they had used a QPO to protect some of that PHI.
[ Friday, July 19, 2019 ]
Jeff [3:39 PM]
As you should know, while HIPAA has pretty strict rules for most covered entities, those that provide services in the substance abuse arena are often subject to even more strict rules. Called the "Part 2 Rules" since they come from 42 CFR Part 2, they basically prohibit the disclosure of patient information by federally-supported substance abuse centers unless the patient gives specific consent for the particular disclosure.
As with any privacy rules, the stricter the rules, the worse the utility of the data. And in the substance abuse arena, allowing patient privacy serves a great good (patients won't be afraid to seek care due to the fear their addiction will be disclosed), but that same privacy can prevent programs from providing the help that patients need. This can be particularly troubling in the face of the opioid epidemic.
HHS is proposing to revise those rules
somewhat to allow better sharing of data between providers. It will be interesting to see how it plays out; parties from both sides will be likely to weigh in.
[ Wednesday, July 03, 2019 ]
Jeff [6:20 PM]
[ Monday, July 01, 2019 ]
Jeff [2:42 PM]
Those "obsessed" with privacy (hey, obsession is in the eye of the beholder; one person's reasonable caution is another's obsession) know that if a digital service is "free," the service isn't the product; you are the product to the service's developer. Google Maps is a free product, so it seems; actually, though, you are the product: when you use Google Maps, Google gathers data on you that it uses to sell other products to its customers. If you don't care about privacy, it's a great deal: you give up privacy you don't want for a free map program.
But if you do care about privacy, what should you do? Find less intrusive programs. Some are free but some cost money, but that's what you have to do if you want to protect yourself.
Anyway, if you're looking for alternatives to the non-private Google products, here's a list
. Think about it. . . .
[ Thursday, June 27, 2019 ]
OCR has published 2 new FAQs
Jeff [10:14 AM]
relating to when and how health plans may share PHI for care coordination with other plans serving the same individuals. The first question actually alludes to one of the tricky elements of uses/disclosures that are for "health care operations" of a different covered entity: not all "operations" elements are acceptable in those situations. Care coordination is one of the acceptable elements, though, so that's good. The second question delves into when an entity can use PHI that it received for a different purpose to tell the individual about other products and services, without the communication being "marketing" (in which case the individual must authorize that use/disclosure).
[ Monday, June 24, 2019 ]
Jeff [11:36 AM]
Two New York (Southern Tier, Allegany area) health care providers were hit by Ransomware
last week. No word on how the attacks occurred, but I'd guess both started with email phishing schemes.
[ Wednesday, June 05, 2019 ]
Now, it's LabCorp.
Jeff [2:14 PM]
Just days after Quest announces a breach of 12 million patients, LabCorp announces a 7 million patient breach of its own
. Well, not really its own: like the Quest breach, LabCorp is announcing a breach at its billing vendor, which is the same billing vendor that Quest uses.
[ Monday, June 03, 2019 ]
Quest Diagnostics announces a big breach:
Jeff [3:03 PM]
it looks like a billing vendor, AMCA, suffered the breach
, which appears to be a phishing-based email access hack. It does not look like lab test results were accessed, but billing and financial information was (which is still PHI, and would also include some indicia of what medical issues the data subject might have, given the indication of which tests were ordered and conducted).
[ Thursday, May 30, 2019 ]
MIE breach brings state fines as well:
Jeff [11:29 AM]
Yesterday my favorite HIPAA/Privacy reporter tipped me off to the fact that MIE also got fined by state regulators. MIE is an Indiana-based medical records company, and its clients are spread across the Midwest and elsewhere. In addition to the $100,000 fine to OCR, MIE also paid $900,000
to a total of 16 states (Arizona; Arkansas; Connecticut; Florida; Indiana; Iowa; Kansas; Kentucky; Louisiana; Michigan; Minnesota; Nebraska; North Carolina; Tennessee; West Virginia; and Wisconsin) to settle HIPAA and state law breaches.
This is a good reminder: you can't only look at HIPAA to determine your obligations to protect data and report breaches; you also must look at state laws. Specifically, all states have data breach reporting laws, and most have either personal data protection/security laws or general "deceptive trade practices" laws that contain a privacy component. Thus, your data security activities must be HIPAA compliant and
state-law compliant, and if you suffer a breach, you must look at both the applicable state laws as well as HIPAA to determine your reporting obligations (some breaches require reporting under HIPAA only, some under state law only, and some under both).
Additionally, since the HITECH Act, OCR isn't the only show in town as far as HIPAA enforcement goes. Even if OCR does not fine an entity, a state can do so specifically for a HIPAA violation, not just for a state law violation.
In MIE's case, MIE paid OCR for violating HIPAA but also paid the 16 states for violations of HIPAA and state laws (i.e., not just state laws). But it was an agreed order, so it's hard to tell what would've happened if MIE had objected, arguing that, since OCR had already fined them, they should not have additional HIPAA liability to the states. I assume the states would've dropped the HIPAA part and relied on state law exclusively.
The final lesson: there are multiple regulators. Don't forget that.
Recent OCR activity: Touchstone and Medical Informatics Engineering:
Jeff [10:58 AM]
If you've been watching the news, you'd have seen a couple of recent HIPAA enforcement actions, with some striking differences.
First, as I mentioned below
, Touchstone Imaging got tagged for $3,000,000 for a server issue that left FTP files exposed to anyone searching the internet. Then, shortly thereafter, business associate MIE
got tagged with a $100,000 fine because a hacker got access to their patient files. Why the big difference? I'll discuss in a later post. . . .
[ Friday, May 17, 2019 ]
Jeff [11:28 AM]
[ Tuesday, May 14, 2019 ]
Jeff [5:43 AM]
[ Monday, May 13, 2019 ]
Jeff [11:52 AM]
We've heard it over and over: the healthcare industry is the biggest target for data breaches, given the overall value of its data, plus the large number of targets with, shall we say, less than stellar defenses. Here's proof
that those indications are right: healthcare leads in total data breaches and total data breached.
Jeff [11:49 AM]
[ Monday, May 06, 2019 ]
Unprotected FTP servers
Jeff [8:49 PM]
can cause problems, since whoever finds them on the internet can access the data in them. They aren't easy to find, but they can be. Of course, when your initial response is that there was no PHI disclosed, when in fact 300,000 people had their PHI exposed, you should expect a fine.
[ Monday, April 29, 2019 ]
Jeff [8:37 AM]
I haven't seen the actual proposed regulatory text yet, but Modern Healthcare is reporting
that OCR will lower the maximum fine level for organizations that violate HIPAA, depending on the organization's level of culpability. Obviously, OCR could have exercised prosecutorial discretion in levying fines, but it can't hurt to encourage organizations to lower their culpability level.
[ Wednesday, April 24, 2019 ]
Brookside ENT and Hearing Center in Battle Creek, Michigan got hit by a ransomware attack. They didn't pay the hackers, their medical records were lost, and they have gone out of business. The two partners have gone into early retirement.
Jeff [5:47 PM]
So, Ransomware can kill you.
[ Monday, April 08, 2019 ]
Jeff [1:20 PM]
[ Friday, March 22, 2019 ]
Jeff [1:59 PM]
This is a big one
. It is also not an OCR settlement, but rather a settlement of a class action lawsuit by affected individuals. Class action cases are hard to bring, but they got a big settlement here.
[ Thursday, March 21, 2019 ]
Jeff [7:06 AM]
[ Monday, March 18, 2019 ]
Jeff [12:46 PM]
According to reports from OCR
. Email hijacking and ransomware are the leading trouble-makers.
Jeff [12:43 PM]
Cyber Risk Assessments or Security Risk Assessments ("SRAs") are pretty common in the privacy universe. In fact, doing some form of an SRA (and regularly repeating/updating it) is a required activity for any HIPAA covered entity or business associate. How do you know what types of safeguards are reasonable and appropriate for your business if you don't understand what your risks are? However, before you go off and do one, here are 5 questions you should ask
. (One note: I'd add HITRUST to the "frameworks" listed in question 2.)
[ Monday, March 04, 2019 ]
Jeff [7:47 AM]
[ Saturday, March 02, 2019 ]
Jeff [1:06 PM]
[ Wednesday, February 27, 2019 ]
Jeff [10:36 AM]
[ Wednesday, February 20, 2019 ]
Jeff [10:14 AM]
[ Monday, February 04, 2019 ]
Jeff [10:51 AM]
[ Wednesday, January 30, 2019 ]
Discover noted something funny
Jeff [12:38 PM]
that indicated that some of its cardholders' information was out on the web, indicating that there had been a breach somewhere. Discover's notice doesn't contain much information (more on that in a bit), but it does indicate that it wasn't their fault. However, they did replace cards for affected individuals and agreed that they wouldn't be responsible for fraudulent charges (both of which would be true regardless of whether the breach was Discover's or someone else's).
Two things to note. First, many state data breach notification laws, but most importantly and particularly HIPAA, require covered entities to report breaches; the requirement isn't to report your own breach, but to report any breach you discover. That's the duty of data holders -- if you know someone's data is breached, let them know. Data breach reporting is not an admission of fault, and most data breaches don't result in fines or lawsuits. The point of breach notification is not (or at least shouldn't be) to tattle on yourself, it's to help out the public whose data is leaked and who might not know about it or how to protect themselves.
Secondly, it's not surprising that Discover's notice didn't say too much, like what they found or how they found it. Why is that? Because you don't want to give up your data security secrets. If the black hats learn how you found out something, they might learn how to hide it better. Especially if you discovered it via some clever means.
Regardless, it's an interesting notice to get in the millions of data breach notifications.
Update: Jon Drummond is no relation (as far as I know), in case you thought so.
[ Wednesday, January 23, 2019 ]
Oregon wants to pass a law
Jeff [4:23 PM]
to prohibit the sale of de-identified
data without the data subject's consent. That is dumb -- de-identified data does not have a data subject. And if it's truly de-identified, there is no downside to its being shared, at least no downside to the data subject (because, again, there is no data subject if it's de-identified).
I understand the "property rights" concept, but it really doesn't work with data. Data isn't a thing like that; data is a fact, and you can't own a fact. The exact same data can be possessed by multiple people at the same time, without diminution of the value to any other holder. Plus the data may only connect to a particular subject in a particular situation.
For example, let's say my birthday is January 1, 1960. 1/1/60 is in my medical record at my doctor's office, which means that data ("1/1/60") is PHI. Let's also say I went to my doctor today, January 23, 2019 (1/23/19), for my annual physical. That data ("1/23/19") is also PHI. Do I own 1/1/60 or 1/23/19? If those data are my property, can I keep other people from using them? How about other people who were born on the first day of 1960? Do they own the data and I don't? Tenants in common?
Now, I do have some interest in the connection between those two dates, me, and my doctor's office, but do I own all that data as long as it's connected?
More importantly, what if you de-identified it by HIPAA standards? All you'd know is that some 59-year-old person went to that doctor's office in 2019. In Oregon, I would still own that data, even though you don't know it's me. There will be other people aged 59 who come to that doctor's office in 2019, and that data will belong to them; how can you tell which data is theirs and which is mine once it's de-identified?
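For reference, the Safe Harbor method's treatment of dates and ages (strip dates to the year; lump ages over 89 into one bucket) can be sketched as follows; the record layout here is hypothetical:

```python
def safe_harbor_dates(record):
    """Generalize date/age fields per the HIPAA Safe Harbor method.

    Safe Harbor requires removing all elements of dates (except year)
    directly related to an individual, and aggregating ages over 89
    into a single '90+' category.
    """
    out = dict(record)
    for field in ("birth_date", "admission_date", "discharge_date"):
        if field in out and out[field]:
            out[field] = out[field][:4]        # keep only the YYYY portion
    if out.get("age") is not None and out["age"] > 89:
        out["age"] = "90+"
    return out
```

Even after this generalization, as the example above shows, year-level data plus an auxiliary source (like a court filing) can sometimes still point at one person.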
Even if it's not de-identified, the doctor's office should have some
right to the data in its own records. It should not have unfettered rights to do with it whatever it wants (and it doesn't, because of HIPAA and other privacy laws), but it surely has the right to use the data to run its business.
I shouldn't complain -- like the Illinois Biometric Privacy Law, this is good for lawyers. But it's unnecessary and dumb.
[ Friday, January 11, 2019 ]
Jeff [8:34 AM]
A Michigan HIV/AIDS and substance abuse provider has suffered a data breach
after a phishing attack. I suspect this is more of an ID theft issue, but bad news anyway. Interestingly, (i) no word on how many were affected, and (ii) the breach occurred in April 2018 but notification only went out recently; that could be because the breach was only discovered in the last month or two, but one wonders if the 60-day time limit in HIPAA was met.
[ Tuesday, January 08, 2019 ]
Jeff [8:24 AM]
Mintz has a good wrap-up of some of the bigger HIPAA goings-on from 2018 here
[ Thursday, January 03, 2019 ]
Jeff [1:16 PM]
As a bit of an analog to yesterday's post about the impact of a breach on stock price, recently breached companies tend to improve their performance against the market, which might indicate that the breach serves as a "wake-up call" for the company's leadership. Going hand in hand with that thought, Health IT Security notes
that recently breached hospitals tend to increase their advertising spend by 64% after a breach.
[ Wednesday, January 02, 2019 ]
Jeff [3:51 PM]
It's not as big or as consistent as you might think, but it's not negligible either. Paul Bischoff and Matthew Dolan have done some research and posted the results here
Interestingly, companies that suffer breaches tend to be underperforming companies anyway. However, their performance improves after the breach, at least compared to market averages. Low point tends to be about 2 weeks post-breach, but for the following 6 months, the companies tend to outperform the market.
Maybe suffering a breach serves as a wake-up call?
It's a relatively small data set, and doesn't relate much to small and non-public businesses, but it's interesting to ponder.
Jeff [12:56 PM]
[ Friday, December 21, 2018 ]
Jeff [1:13 PM]
As Baylor Scott & White-Frisco (a joint venture between BSWH and USPI) is finding out, a credit card breach is also a HIPAA breach
if it's connected to a HIPAA covered entity. The incident is similar to one that happened at Banner Health in Arizona a few years ago (reported here
): a credit card processor vendor suffered a breach, but it involved BSW-Frisco's patients' data.
Hat tip: Taylor Weems, CIO at Midland Health.
[ Thursday, December 13, 2018 ]
CMS has asked for public comment
Jeff [3:55 PM]
on how HIPAA should be changed. Personally, I'm a "Chesterton's Fence" kinda guy, but I actually think it works pretty darned well as is. But I'll be interested in seeing the public commentary.
Jeff [3:40 PM]
When a hospital fails to cut off PHI access
to a former employee, it can be a HIPAA violation. In this case, a relatively inexpensive one (relative being the key word, it's still a lot of money).
[ Friday, December 07, 2018 ]
Jeff [12:43 PM]
[ Thursday, December 06, 2018 ]
Jeff [10:44 AM]
may or may not be a HIPAA breach, but NY's data breach notification law is likely implicated. It's unclear whether the agency would be a HIPAA covered entity; it's described as a health provider, but if it doesn't conduct HIPAA-regulated transactions in electronic format, technically it might not be a HIPAA "covered entity."