Tell HN: The Quiet Collapse of US Defense Research Infrastructure
I've spent 20 years watching defense research deteriorate from the inside - from Naval Nuclear Lab intern to Raytheon, a UARC, and now a boutique contractor. What I've seen is concerning, but not for the reasons most might think.
The first wake-up call came at Raytheon. Fresh out of college with signal processing coursework under my belt, I joined a program that would balloon from $150M to $250M, ultimately requiring congressional approval to continue. The issue wasn't typical defense contractor bloat - it was a fundamental disconnect in how we approached technical work. Systems engineers would develop algorithms in MATLAB, throw them over the wall to us software engineers, then vanish to other programs. We'd struggle to translate them to C++, often discovering the algorithms didn't actually work. We eventually had to pull a retired expert back to make the core algorithms functional.
The real kicker? The most sophisticated signal processing work was subcontracted out. Raytheon had become primarily a software integration shop. When I and four other recent grads in their fast-track program predicted our site would significantly downsize within 10 years, management dismissed us. Today, that campus has shrunk from five buildings to one three-floor facility. Those other fast-track folks? They've gone on to start their own companies or become tech executives.
I moved to a UARC hoping to do more meaningful work, bringing GPU computing expertise just as CUDA 1.0 was emerging. My pitch was simple: CUDA's backward compatibility meant we could double our speed every 18 months just by buying new hardware. It worked brilliantly - I even won the lab's highest award. I became known for turning academic papers into polished prototypes that management used to secure major programs. But then the system's flaws emerged. A manager circumvented my chain of command to keep me on his program. Despite delivering field-deployed systems (still in use today), when funding dried up, I was stuck. All those relationships with operators and government management evaporated. It's not even the government's fault - their hands are often tied by funding structures.
The PhD years opened my eyes further. Working full-time while completing my doctorate in 4 years, I published 6 papers, won awards, and got promoted to chief scientist at $190k. But without funding, titles mean nothing. I jumped to a boutique defense contractor, secured $2M in grants within 9 months - and walked straight into another systemic issue. I'm managing PhDs who lack fundamental signal processing knowledge and deliver sloppy work, which explains why we struggle to convert to Phase 3 programs.
The current state of government ML research is particularly troubling. Everyone's working 3+ projects, spreading themselves thin. Most groups just take off-the-shelf models from Hugging Face and apply them to their specific data. Nobody uses more than 4 GPUs for training because they can't afford more compute. I watched 5 contractors tackle the same MWIR video problem, all delivering similarly mediocre results.
The solution seems obvious: instead of every group rolling their own mediocre models with insufficient resources, we need 1-2 primes building proper foundation models for others to fine-tune. Most of this could be done in the open before moving to classified environments. But the current structure of defense funding makes this nearly impossible. VC-backed defense startups aren't the answer either. They're making the same mistakes - small compute, off-the-shelf models, requiring relocation from experienced 40+ year-old scientists who won't move. They're essentially just spending the money the government can't, without solving the fundamental issues.
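To make the fine-tuning half of that concrete, here's roughly what each group's "roll your own" workflow already looks like - a minimal sketch using a Hugging Face ViT checkpoint as a stand-in for a shared foundation model, with a hypothetical folder-per-class dataset of MWIR frames (directory names, checkpoint choice, and hyperparameters are all illustrative, not from any actual program):

    from datasets import load_dataset
    from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                              DefaultDataCollator, Trainer, TrainingArguments)

    checkpoint = "google/vit-base-patch16-224-in21k"          # stand-in shared backbone
    ds = load_dataset("imagefolder", data_dir="mwir_frames")  # hypothetical folder-per-class data
    labels = ds["train"].features["label"].names

    processor = AutoImageProcessor.from_pretrained(checkpoint)
    model = AutoModelForImageClassification.from_pretrained(checkpoint, num_labels=len(labels))

    def to_pixels(examples):
        # Convert each frame to the backbone's expected pixel tensor; keep the integer labels.
        examples["pixel_values"] = [
            processor(img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
            for img in examples["image"]
        ]
        del examples["image"]
        return examples

    ds = ds.with_transform(to_pixels)

    args = TrainingArguments(
        output_dir="mwir_finetune",
        per_device_train_batch_size=16,   # the kind of scale that fits a 4-GPU budget
        num_train_epochs=3,
        remove_unused_columns=False,      # keep the image column alive for the transform
    )
    trainer = Trainer(model=model, args=args, train_dataset=ds["train"],
                      data_collator=DefaultDataCollator())
    trainer.train()

The last mile really is this thin. The expensive, differentiating part is the pretrained backbone - which is exactly what 1-2 primes could build once, in the open, and let everyone else fine-tune.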
My former students who left for industry are thriving. The system needs fixing, but I'll be joining them unless someone's building something to actually address these fundamental issues.
Working full time while pursuing a PhD?? How?
I know this is really unlikely to help, but let me throw a few ideas at you and see if any might help.
1> GNU Radio is a way to take flow graphs built in a GUI tool, which then get translated into a Python program that runs the graphs using a lot of C/C++ libraries to get real-time performance at megasamples/second on really, really cheap hardware. Perhaps there are some applications where it could help? (A rough sketch of what the generated program looks like follows after item 2.)
2> Bit Level Systolic Arrays might be useful for flowing data at high speed (yeah, it's my BitGrid hobby horse)... Instead of a CPU in each cell, it's just a 4x4 bit Look Up Table and a 4-bit latch (to eliminate timing issues and make everything flow deterministically). A toy model of one cell is sketched below as well.
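To make (1) concrete, this is roughly the kind of Python program GNU Radio Companion generates from a flow graph - the Python just wires together the C/C++ blocks that do the real work. The specific blocks here (a tone source written to a raw IQ file) are purely illustrative:

    from gnuradio import gr, analog, blocks
    import time

    class ToneToFile(gr.top_block):
        def __init__(self, samp_rate=1e6):
            gr.top_block.__init__(self, "ToneToFile")
            # 100 kHz complex tone -> throttle to pace at samp_rate -> raw IQ file
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 1.0)
            throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
            sink = blocks.file_sink(gr.sizeof_gr_complex, "tone.iq")
            self.connect(src, throttle, sink)

    if __name__ == "__main__":
        tb = ToneToFile()
        tb.start()
        time.sleep(2.0)   # let a few megasamples flow through the C++ blocks
        tb.stop()
        tb.wait()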
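And for (2), a toy software model of what I mean by a single cell: four 4-input look-up tables plus a 4-bit output latch, so neighbours only ever see latched values and data flows deterministically tick by tick. The details are my own guess at a minimal version, not a spec:

    class Cell:
        def __init__(self, luts):
            # luts: four 16-entry tuples of 0/1; luts[i][addr] is output bit i for input nibble addr.
            assert len(luts) == 4 and all(len(t) == 16 for t in luts)
            self.luts = luts
            self.latch = [0, 0, 0, 0]   # 4-bit latch holding the last clocked outputs
            self._next = [0, 0, 0, 0]

        def compute(self, inputs):
            # Combinational phase: address the LUTs with the 4 input bits, but don't expose yet.
            addr = inputs[0] | (inputs[1] << 1) | (inputs[2] << 2) | (inputs[3] << 3)
            self._next = [self.luts[i][addr] for i in range(4)]

        def tick(self):
            # Clock phase: latch the computed bits; neighbouring cells only ever read the latch.
            self.latch = self._next
            return self.latch

    # Example: a cell whose output 0 is the XOR of inputs 0 and 1 (a 1-bit half-adder sum).
    xor01 = tuple(((a ^ (a >> 1)) & 1) for a in range(16))
    zero = (0,) * 16
    c = Cell([xor01, zero, zero, zero])
    c.compute([1, 1, 0, 0]); print(c.tick())   # -> [0, 0, 0, 0]
    c.compute([1, 0, 0, 0]); print(c.tick())   # -> [1, 0, 0, 0]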
These are great ideas but the timing is wrong.
1. GNU Radio circa 2006 didn't come close to supporting the SNRs and modulation schemes we needed; the tools simply hadn't matured. Furthermore, because everything is structured as blocks, it's difficult to close the loops tightly enough to hit theory on timing, say with respect to re-demodulating blocks after getting loop lock. Perhaps it's fine now, but the problem is moot anyway. You can already run cell base stations off hardware that costs less than $1,000.
2. This is fine as long as I can write high-level code and have a compiler do the translation. Otherwise, there's already sufficient compute power in a low-SWaP package to do the necessary processing.
> But the current structure of defense funding makes this nearly impossible. VC-backed defense startups aren't the answer either. They're making the same mistakes - small compute, off-the-shelf models, requiring relocation from experienced 40+ year-old scientists who won't move. They're essentially just spending the money the government can't, without solving the fundamental issues.
Have you tried raising your concerns with one of the DARPA PMs?
In theory, a significant part of their job should consist precisely of collecting feedback on the mistakes being made, in order to iterate more quickly toward effective solutions.
Yes. The problem is DARPA PMs only have a 4-year window. They also have a limited budget of about $80 million. They also have to pitch a program to get hired into DARPA, and then further have to get buy-in from DARPA brass to move ahead with their portfolio.
Moreover, DARPA is not the Air Force, Army, Navy, etc. Just because DARPA makes the widget doesn't mean anyone has to use it. Also, working for DARPA is difficult, as they micromanage their PIs and you have to constantly show progress (even biweekly), which isn't conducive to research.
My experience has been that DARPA has a romanticized reputation among those who haven't actually worked for it.
As a researcher you are not incentivized to tell DARPA the truth, because if they're not happy with it, they won't give you any money, which is already hard to come by.
DARPA PMs are often young, have few publications, are new to the government, and have romanticized ideas about what they think they can accomplish.
Is there a way to get in touch with you?
Can I email you at your Substack? Otherwise, I can reply to one of your threads with my email and then delete it. Let me know so we can time this well.
If anyone's building something to address these systemic issues, I'd love to chat.
There's kind of a product space to support "buyers", specifically calling out your section regarding "software integrator" vs "knowledge worker / domain expert." BTW, "buy vs build" is common engineering-management language for this sort of thing, I think; https://en.wikipedia.org/wiki/Component_business_model ... there's some silliness in this space too, since software itself is intended to already be maximally flexible; so "sellers" spending time building lots of flexibility is sort of a smell.
To motivate some of the machinery: systemically it's kind of a no-brainer given how financing and oversight work, I suppose; overseers want to see early results, which you might get from a knowledge worker rehashing or tweaking an existing solution, and then integrators get to play 'pick up sticks'.
So the integrator is the "buyer" because they have opted to defer product expertise (disregarding whether there's even any real cost, i.e., a FOSS solution, etc.; "buy" meaning "not build").
The buy side has popular tools like linters and static analyzers; this is pretty huge. If you look at the "cloud native" space, treating lots of the cloud software players as "integrators" of software that's intended to be distributed, public, always-on, and effectively 'ad-supported', the buy side has the "Security and Compliance" layer: https://landscape.cncf.io/ ... a lot of money is flowing there too, since it simplifies precisely what you mention, so you're in good company. Granted, "integrator work" might be a bit less specialized than the sorts of analyses these tools perform, but it's the same problem, applied not to abstract "domain expertise" but to "domain expertise of deploying existing software solutions."
It might not be popular thinking, but doc tools like CodeViz, doxygen, any worthwhile IDE, and other tools in that space probably sit somewhere between this space and the next:
OTOH, managers are sort of 'friendly buyers' for the expertise of their employees, so you can look at the tools of requirements elicitation, definition, capture, scheduling, prioritizing, and lifecycle management too - thinking specifically of things like stories, epics, kanban, and presumably legacy systems like DOORS, which I don't have any experience with. If you want to avoid too much philosophizing and defer to some pretty broad experience, SWEBOK has a lot of words about this more 'pre-compiled' approach, as opposed to the 'just-in-time' approach most agile-type firms use for their usually simple value props.
The same sorts of things might apply, though: identifying whether "coverage exists" for "domain-expert produced prototypes" -- specifying the form of those in, say, what you mention, signal processing. Basically, TDD -- does the shipped code match the tests? Does the integrator know enough to write such tests?
Essentially using 3rd party machinery to establish acceptance criteria, and clear targets to ensure that the "buyer-builder" contract is fulfilled.
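Concretely, that contract could be as simple as golden tests shipped alongside the domain expert's prototype: reference inputs and outputs that the integrator's port has to reproduce within an agreed tolerance. A minimal sketch, where the module, file names, and tolerances are all hypothetical:

    import numpy as np
    import pytest

    from ported_filter import matched_filter   # hypothetical: the integrator's reimplementation

    @pytest.mark.parametrize("case", ["chirp_snr10", "chirp_snr0"])
    def test_port_matches_prototype(case):
        # Golden vectors exported by the prototype author (e.g., from the MATLAB reference).
        ref = np.load(f"golden/{case}.npz")
        got = matched_filter(ref["rx"], ref["template"])
        # The tolerance is the acceptance criterion the domain expert signs off on.
        np.testing.assert_allclose(got, ref["expected"], rtol=1e-6, atol=1e-9)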
Thanks for this thoughtful analysis. Your framing of the 'buyer vs builder' dynamic really resonates with what I saw at Raytheon and explains a lot. We weren't just struggling with a technical handoff - we were watching an organization transform from a builder to a buyer of expertise.
The part about 'overseers wanting early results' is especially relevant to defense. Program managers need to show progress to keep funding, which incentivizes quick integration of existing solutions over deep technical work. I saw this firsthand when our most sophisticated signal processing work was subcontracted out.
But here's where defense differs from commercial software: When you're building complex systems like radar or signal processing pipelines, the 'buy' approach has serious limitations. You can't just integrate your way to novel capabilities. Those retired experts we had to bring back? They represented irreplaceable domain knowledge that can't be purchased off the shelf.
The tools you mention (linters, static analyzers, requirement tracking) are valuable, but they don't solve the core problem: you need people who deeply understand both the mathematics and the implementation. In my current role, I see PhDs struggling because they lack this foundational knowledge - no amount of process or tools can bridge that gap.
What I think we need is a model that preserves deep technical expertise while still allowing for integration of existing solutions. But the current funding and organizational structures in defense make this really hard to achieve.
Well, naively, then, what you might be describing is a new language; because if the software itself can't capture the details of the expertise, then that may imply that something simply isn't being expressed.
Languages evolve with expertise, specialties are tailored to and can be permanently and effectively captured; and not just captured, but reused as in the case of libraries.
Perhaps there's some core capability missing from the MATLAB ecosystem that isn't so obvious on the surface. What I know of MATLAB is that it's primarily focused on simplifying DAQ / processing cycles and making them performant, but not necessarily on associating concepts with use cases and correctness/effectiveness (and probably optimality, as an important thought) -- say, the way Objects and Types compartmentalize data transformation effectively for user-facing software applications with understandable UIs.
Apologies for butting in.
Programming language adjustments sound good on the surface but don’t really cut deep enough, as far as the problem domain is concerned.
This is both research and implementation of a military grade solution.
Think about it this way: both the electrical engineering and the mathematics you are combining are (a) cutting edge in their respective fields (for the most part) and (b) cutting edge in their combination. Finding good ways of expressing that geometric structure as a programming language feature or as a subroutine library is 10 years and many more applications (read: real-world tests) down the road from where the original poster's work takes place for the government.
And ipunchghosts seems to have encountered what other people in his position convey behind closed doors: The researchers are sacrificial pawns and even sacrificial chess queens. Your mileage may vary, of course.
Edit: Typo
Again, probably a pretty naive take (and without regard to the amount or quality of outcomes), but looking at the way academic research occurs, thinkers are often exposed to failures and asked to explain them or expound upon the underlying philosophy -- rather than to participate in the full engineering cycle. That participation can take different forms, such as hands-on development, but particularly in engineering it's viewed with skepticism.
I think there's even a take that you can tease the patent system as a way to profit from failing to explain ideas effectively.
Edit: Maybe the consideration then is: what roles are missing? If there's a way to improve off-the-shelf model performance faster than Moore's law, what is it? Scaling to teams with more specialized roles - simulation, model architecture / quantization specialist, systems-level, hardware-match specialist / minimization? Some sort of way to compose models, or perform operations on their contents?
I guess I'm wondering if other researchers in the DoD share my sentiment. I find it hard to believe they don't, but it's something that's not talked about much because, from my point of view, there aren't many researchers with 20 YoE left in the DoD. If I am wrong, point me to them, as I want to join their ranks!