08-May-2026
Once again, folks, I'm back to discuss a significant data breach.
This time we're discussing the breach of Canvas -- one of the largest "cloud" systems that you've probably never heard of unless you're a college student, faculty, or administrator.
Toward the end, I'll discuss the larger topic of data breaches in general and where we are today.
Canvas is a "Learning Management System," or LMS: a hub for most things related to teaching. It's kind of an all-inclusive, one-stop shop for everything instructors need to run their classrooms.

This includes, in part, course content and syllabi, assignments and quizzes, grading, and student-instructor communication.
If your kid is in college, there's a good chance they're using Canvas. There are other LMS products as well, but Canvas is the biggie, used by close to half of all colleges and universities.
A hacking group that calls themselves "ShinyHunters" infiltrated Canvas's corporate network and exfiltrated (stole) data concerning many millions of students and faculty across thousands of institutions. ShinyHunters claims to have exfiltrated more than 3.65 TB of data.
They are demanding payment of ransom from Canvas's corporate parent, Instructure, in exchange for not releasing the data publicly.
The timing likely isn't coincidental, either. I can well imagine that ShinyHunters waited until the hectic last weeks of the school year, when final exams were underway, to "pull the trigger" on their attack. These are critical days for Canvas and their university customers, so a timed attack of this nature could strengthen ShinyHunters' negotiating position.
At the very moment of this writing, it's not been disclosed what the ransom amount is, whether it was negotiated, or whether it was paid. When and if I learn these details, I will update this article.
Update 12-May
So it appears that Instructure, the company behind Canvas, has paid the ransom. According to them, ShinyHunters (the hacking group that breached Canvas) assured them the data would not be released nor would ShinyHunters extort money from individual schools or persons caught up in the breach.
You might (quite fairly) ask how could Instructure be sure ShinyHunters would keep their word. They're criminal hackers, after all. The truth is they can't be sure. But as incredibly ironic as it sounds, these criminal gangs don't want to besmirch their good names. Yeah, I know, I'm rolling my eyes as well.
But there is truth to that. Today's criminal hacking groups operate more and more like legit businesses: they offer support in undoing the effects of their attack, they (usually) honor their word, and so on.
The reason is simple. Aside from the criminality of the breach itself, they want their victims, er, customers to "get what they paid for" in terms of removing the harm after paying the ransom. If they don't do that, then future victims will be even less likely to cough up a ransom.
It's also not been disclosed at this exact time how ShinyHunters managed to gain access to Canvas infrastructure. But big data compromises like this usually happen in one of two ways and maybe a rare third way.
Exploit a technical vulnerability
All software has bugs in it. There's just no getting around that. Software today is "violently complex," as I like to say. Large corporate systems like Canvas and many others have multiple millions of lines of code. No single programmer knows everything that's in there; even entire development teams can't easily keep track of it all. It's just too much. Once a piece of code is written and passes unit testing, it might never be examined again.
Adding to that complexity is the common reliance on 3rd party dependencies* that are part of nearly all software development today.
* There exists a rich 3rd party ecosystem of open source code that performs specific tasks. No need to reinvent certain functions when you can incorporate them into your project. And that, too, introduces vulnerabilities. We call that a (type of) supply chain vulnerability.
As a result, software has bugs. There are different kinds of bugs, too, just like in real life. Certain classes of security-related bugs can facilitate a compromise via methods such as Remote Code Execution (RCE), Privilege Escalation (PE), and some others.
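To make the bug-class idea concrete, here's a toy illustration of one of the oldest such classes, SQL injection, next to its fix. The schema and data are invented for this sketch, and Python's built-in sqlite3 stands in for a real database:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker input is spliced directly into the SQL text.
    # A name of "x' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input as data, never SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2: the whole table leaks
print(len(find_user_safe(conn, payload)))    # 0: no user has that literal name
```

RCE and privilege escalation differ in mechanics, but the spirit is the same as this little leak: attacker-controlled input ends up somewhere it gets interpreted as code instead of data.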
No one knows how many security bugs there are or where they are. But they tend to be found in one of two ways.
Interestingly, A.I. is proving itself quite adept at scanning source code and finding potential bugs. To the extent the source code is "closed" (not openly published) then companies that develop this code have a head start on finding bugs using A.I. bug detection agents. That's a good thing.
But that same technique can be used to scan public, open source code repositories as well. In this case, the bad guys could possibly find exploitable security bugs before the good guys can find and fix them.
Exploit a human vulnerability
Even the most securely designed systems can be compromised when the human vector is introduced.
We call that Social Engineering. In short, that means tricking someone with legitimate access to a system, usually an employee, into disclosing that access to an unauthorized party. They might be fooled into giving up usernames, passwords, even the two-factor codes that are meant to prevent exactly this, or into performing an action that leads to compromise.
Hardening systems against socially engineered exploitation is top of mind today among companies large and small. But it is very, very difficult to secure against this class of threat.
Mitigating measures include cyber awareness training and multi-factor authentication in its various forms: one-time code numbers, biometrics, physical security tokens, even "code words." The idea is that by stacking authentication mechanisms, you reduce the likelihood of a social engineering attack succeeding.
That sounds all well and good, but it also significantly increases the odds that a legitimate employee can't gain rightful access even when no social engineering is involved.
That's because all authentication systems have a tug-of-war between security and convenience. More of one generally means less of the other. The trick is finding the balance and using the right tools.
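One of the stacked factors mentioned above, the six-digit codes from authenticator apps, is simple enough to sketch in full. Here is a minimal time-based one-time password (TOTP) per RFC 6238, using only Python's standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Minimal RFC 6238 time-based one-time password (HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    # The moving factor: how many 30-second steps since the Unix epoch.
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone share this secret and derive the same short-lived code.
secret = b"12345678901234567890"
print(totp(secret, timestamp=59))  # RFC 6238 test vector -> "287082"
```

Because both sides derive the code independently from a shared secret, a phished password alone isn't enough. Of course, a determined social engineer will simply ask the victim for the current code too, which is why codes are a layer, not a cure.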
An Inside Job
Although comparatively rare, there have been cases where an authorized person was bribed, coerced, or threatened into permitting access to unauthorized persons. Sometimes it's a disgruntled employee, or one who simply sees an opportunity. The insider scenario is unusual, but the odds aren't zero.
In the case of the Canvas breach, at the time of this writing, it's not been disclosed which of these methods was used. If I were to place a modest bet, I'd go with No. 2, exploiting a human vulnerability, as the method of "first contact."
One of the biggest downsides to today’s cloud-centric ecosphere, especially regarding major platforms like Canvas and many others, is the potential damage from a single successful intrusion.
Security professionals often refer to this as the blast radius. When thousands of organizations ("tenants," in SaaS* parlance) operate under a centralized cloud platform, the compromise of highly privileged (superuser) administrative systems or credentials can potentially impact many customers at once. Possibly that provider's entire customer base, all in one fell swoop.
* SaaS (Software as a Service) describes a class of software products that are hosted on cloud servers and accessed over the internet using a browser. They're usually subscription-based, updated automatically, and cloud-integrated. Typically nothing resides locally.
In security terms, that creates concerns around concentrated trust and possible single points of failure (SPOFs). From the bad actor's point of view, that's a hell of a payback for comparatively little work.
By contrast, fully isolated per-customer deployments can reduce exposure because a breach affecting one institution cannot move laterally to others. Imagine if Canvas were truly isolated per school rather than all schools simply being tenants in a much larger system. Had that been the case, the blast radius would have been firecracker-sized, not the nuke it turned out to be.
Let's draw an analogy:
In a per-school isolated setup, each school is like a standalone house in the Instructure neighborhood. A break-in at one house would generally stay confined to that house.
In a typical SaaS environment, schools are more like apartment units in a large complex with shared infrastructure behind the walls. Tenants still have separate units and locks, but if the building itself is compromised, there's a greater potential for problems to spread more broadly.
Not a perfect analogy, but that's the general idea.
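The apartment-building idea maps directly onto code. Here's a toy sketch (the table, class name, and data are mine, not Instructure's) of the multi-tenant model, where the best a shared platform can do is pin every query to a single tenant:

```python
import sqlite3

# One shared database, many schools: the classic multi-tenant layout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (tenant TEXT, student TEXT, grade TEXT)")
conn.executemany("INSERT INTO grades VALUES (?, ?, ?)", [
    ("state_u", "alice", "A"),
    ("tech_college", "bob", "B"),
])

class TenantScope:
    """Wrap the connection so every query is pinned to one tenant.

    Application code never writes the tenant filter itself, which removes
    the "forgot the WHERE clause" class of cross-tenant data leak."""

    def __init__(self, conn, tenant):
        self.conn, self.tenant = conn, tenant

    def grades(self):
        return self.conn.execute(
            "SELECT student, grade FROM grades WHERE tenant = ?",
            (self.tenant,)).fetchall()

print(TenantScope(conn, "state_u").grades())  # only state_u's rows
```

Even with this discipline, all tenants still share one database and one set of superuser credentials, and an admin-level compromise bypasses the scoping entirely. That's the shared wall behind the apartment units, and it's exactly what per-school isolation removes.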
Other SaaS providers would do well to learn from this object lesson par excellence on things they might do differently. And if I were the CIO of a university, I'd demand such isolation as a condition of doing business.
ARPANET, the precursor to the Internet, was built for an era when openness and connectivity were prioritized over security. When it transitioned to TCP/IP in 1983, the foundation for today’s Internet was established but it inherited the idea of openness.
Most modern security measures were introduced over the following years, incrementally and reactively. The scale of today’s cyber threats suggests that approach is no longer sufficient.
This is why institutional isolation and quarantining with highly conditional access and gated segregation may become necessary. This is actually how things worked before the internet and cloud-based systems became enmeshed with internal systems over the last couple of decades.
The nature and especially the scale of today’s data breaches were not common until internal systems became broadly internet-connected starting in the mid-1990s. The move to SaaS in the early aughts accelerated that trend significantly.
The cybersecurity world is split on whether ransom payments should be made illegal.
On the one hand, it may slow ransomware schemes, since getting paid could become more difficult. But unless cryptocurrency itself is outlawed, which is a topic I'll write about someday, making ransom payments illegal would not necessarily stop them from being paid. It would simply drive the payments underground.
Certain critical organizations such as hospitals, utilities, schools, etc. could face a crippling outage or even extinction if the payment were not made. Yes, they should have had backups in place and, yes, they should have had better security practices. But allowing them to die (as punishment?) is no answer, either. The collateral damage from their death could hurt far more people through no fault of their own.
What needs to happen is strong effective regulation regarding security practices depending on the particulars and nature of the organization. We actually have some of that today. HIPAA is one you've probably heard of that covers healthcare. Other industries like insurance, finance, etc. have their regulatory bodies as well. Unfortunately they don't encompass everything they need to and thus aren't particularly effective.
The sad fact is, most of us have had our personal data breached numerous times. Regarding our data privacy, that ship has sailed and sunk in the Mariana Trench, as I like to joke.
Over the last 20-ish years when the internet and online systems truly became the way business and government operated, our data has been stored on dozens or even hundreds of databases belonging to as many organizations.
Many of those orgs had no business holding our data in the first place. But as is usually the case, lawmakers are woefully behind the pace of technology, so protective laws and regulations simply don't happen in a timely manner or, hell, at all.
A lot of those orgs were and continue to be sloppy with their security posture. Cutting costs (and maximizing profit if not a gov't org) is paramount to all of them. Security is expensive and unsexy. And it's invisible, unless a breach happens. So no CFO wants to expend any more time, effort, or money than they can get away with.
The US still doesn't have an EU-like GDPR (General Data Protection Regulation). Not that the GDPR is perfect, mind you. It has plenty of flaws. But at least it's something. We can't even manage that little bit here.
And we, the plebs that occupy this land, are the hapless victims of that sloppiness. It's a human rights crime!
Some things not to do
Don't use services that promise to find and delete your data from the "dark web". They prey on fear and are useless. They might find your data and report where they found it, but having that knowledge isn't useful. They cannot delete your data held by bad actors so what difference does it make?
You don't need to pay for credit monitoring or LifeLock, either. LifeLock, especially, is an expensive service that doesn't provide meaningful security. Some of the things it advertises are things you can do yourself for free, such as freezing your credit files.
Since most of our private, personal data is already out in the wild, out of our control, then it necessitates a new focus, a new mindset. That means assuming a post-theft security posture and taking steps to make your already-stolen data harder for criminals to use. Make it "less actionable".
Essentially, you need to lock down your accounts to make them harder to compromise as though all your personal data were printed in the NY Times.
What does that mean?
That means locking down sensitive aspects of our online lives: freezing our credit files, using strong, unique passwords (ideally from a password manager), turning on multi-factor authentication everywhere it's offered, and securing the email and phone accounts that everything else resets through.
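The password piece of that lockdown is easy to get right with tooling. Here's a minimal sketch using Python's standard secrets module (the function names are mine); a good password manager does the same job with far less friction:

```python
import secrets
import string

def make_password(length=20):
    """Random password from a CSPRNG, one unique password per account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def make_passphrase(words, count=5):
    """Random multi-word passphrase: easier to type, still hard to guess."""
    return "-".join(secrets.choice(words) for _ in range(count))

print(make_password())
print(make_passphrase(["correct", "horse", "battery", "staple"]))
```

The point is that every account gets a credential that can't be derived from your already-leaked personal data, so the compromise of one site doesn't cascade into the rest of your life.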
Some of these things require a fair bit of technical savvy to do them correctly. I can help you establish a security posture that offers increased resistance to compromise and teach you how to update and maintain that posture going forward. Teach a man to fish.
Please see my relevant articles.