Trusting Everybody

My security brain says I shouldn’t run untrusted code on trusted machines. My developer brain says I should download 800 transitive dependencies, first execute their build scripts, then also compile and run the code they contain on my own trusted machine. It’s the efficient and economical thing to do, right?

I’m now in the weird situation that I will embrace open source while I’m at work for expedience, but dabble increasingly in proprietary development platforms in my free time. You can write all kinds of macOS apps using the frameworks provided in Xcode without having to reach for CocoaPods or Swift Package Manager. You can write very useful ASP.NET web applications using only packages provided by Microsoft. It is so liberating to say “You know what, I trust Apple on their own platform. I trust Microsoft on their own platform. I can build and run my code and not have to worry about who wrote this library or how it’s being delivered to me.”

The irony is that this is supposed to be open source’s strong point—the idea that anybody can audit the code. The truth is that few people do. This plays out repeatedly. I recently discovered a directory traversal vulnerability in a relatively popular Gemini server. Once I reported it, a few people tested for the same issue in other servers and found the same bug, leading to a rash of hurried patches to various programs. Most likely those vulnerabilities would still be there if I hadn’t made my original report. That doesn’t necessarily mean proprietary software is any better, but when a company is involved it does mean somebody’s job is on the line, which it isn’t for open source. That’s at least some motivation both for getting it right and for fixing any issues that arise. I just want to get my code from a trusted vendor.

For example, Rust works against me here. The project actively avoids putting anything in the standard library that could be done better by somebody else. When I want an mpsc channel, the conventional wisdom is that I’m supposed to avoid the one in the stdlib and use the crossbeam crate because it’s faster. Who maintains that crate? Why do I trust them? Will I continue to trust them in future releases? Probably everything’s fine, but honestly, whether I can seriously evaluate any of that is a crapshoot.
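For what it’s worth, the stdlib channel handles the common case without any third-party code at all. A minimal sketch (plain `std::sync::mpsc`, nothing else) of several producer threads feeding one consumer:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // The stdlib mpsc channel: multiple producers, single consumer.
    let (tx, rx) = mpsc::channel();

    // Spawn three producer threads, each sending its own id.
    let handles: Vec<_> = (0..3)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(id).unwrap())
        })
        .collect();

    // Drop the original sender so the receiver sees the channel close
    // once all producer clones are gone.
    drop(tx);

    for h in handles {
        h.join().unwrap();
    }

    // Arrival order is nondeterministic, so sort before checking.
    let mut received: Vec<i32> = rx.iter().collect();
    received.sort();
    assert_eq!(received, vec![0, 1, 2]);
}
```

Crossbeam’s channel is indeed faster under contention, but for most programs the difference never shows up in a profile—which is exactly why defaulting to an extra dependency is a trust decision, not a technical one.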

It’s understandable why open source projects go down this path. They have limited resources. You want maximum modularity, maximum plugin functionality, so that the experts in any particular domain can contribute their relevant part. But for the end user this creates an impossible trust situation—who are these people? Why should I trust them?

My prediction is that this problem is going to get worse, not better, with increasing numbers of malicious packages and package takeovers. The natural result for open source should be that projects take a much more batteries-included approach, and commit to providing a much wider range of functionality under their maintenance.

If I can only get that from the likes of Apple and Microsoft for now, so be it.