I'm a little surprised by their approach. I mean, it did work, which is cool, and that's the most important thing. Still, I can't stop thinking that I wouldn't have slept before writing an assembler and a disassembler. Judging by the presentation, they had no assembler or disassembler for several months and just lived with that.
An asm/disasm pair can help find typos in listings, find xrefs, or even do some static analysis to check for the classes of mistakes they knew they could make. It wouldn't replace any of the manual work they did, but it could add some confidence on top of it. Maybe they wouldn't have ended up with 50/50 priors on success, but with something like 90/10.
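To make the xref point concrete, here's a minimal sketch of what I mean. The ISA below is completely invented (the real Voyager instruction set is only partially documented, which is the whole problem); the point is only that once you can decode jump instructions mechanically, building a table of "who references this address" is trivial, and that's exactly the kind of cross-check a manual patch benefits from.

```python
# Toy disassembler over a HYPOTHETICAL 2-byte (opcode, operand) ISA.
# None of these opcodes are real Voyager opcodes -- illustration only.
JUMP_OPS = {0x10: "JMP", 0x11: "JSR"}
OTHER_OPS = {0x00: "NOP", 0x01: "LDA", 0x02: "STA"}

def disassemble(mem):
    """Yield (addr, mnemonic, operand) for each 2-byte word."""
    for addr in range(0, len(mem), 2):
        op, arg = mem[addr], mem[addr + 1]
        name = JUMP_OPS.get(op) or OTHER_OPS.get(op, "???")
        yield addr, name, arg

def xrefs(mem):
    """Map target address -> list of addresses that jump there."""
    table = {}
    for addr, name, arg in disassemble(mem):
        if name in JUMP_OPS.values():
            table.setdefault(arg, []).append(addr)
    return table

program = bytes([0x01, 0x40,   # 0x0000: LDA 0x40
                 0x10, 0x06,   # 0x0002: JMP 0x0006
                 0x00, 0x00,   # 0x0004: NOP
                 0x11, 0x02])  # 0x0006: JSR 0x0002
print(xrefs(program))  # {6: [2], 2: [6]}
```

With something like this you can answer "did my patch move a routine that something else still jumps into?" by machine instead of by eyeball.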
Strange. Do I underestimate the complexity of writing an asm and disasm pair?
I'm with you. I feel like having automated tools - even though they aren't certified - would be an improvement over doing it all manually in both time and reliability.
He mentioned a few times that writing an assembler was a no-go.
It would have taken much more time than they had available, and since an assembler would be a new tool, it would have required certification. (So, even more time and paperwork.) Plus, they had incomplete docs and there is no working copy or simulator of Voyager here on Earth. So any assembler written would by definition be incomplete or inaccurate.
https://danluu.com/cocktail-ideas/
Yes, I have strong reason to believe you underestimate the complexity here.
This is so great, I run into this constantly
Well, it was a totally bespoke CPU, and we don't have any working models on earth.
Writing an assembler for a bespoke CPU is one thing, many of us have done it as a toy project, but the stakes are a bit different here. You'd have to mathematically prove that your assembler and disassembler are absolutely 100% correct. When your only working model is utterly irreplaceable and unrecoverable upon error, it probably takes a lot more resources to develop.
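Short of a formal proof, the minimum confidence-builder for an asm/disasm pair is a round-trip check: disassemble every encodable word, reassemble it, and demand you get the original bits back. A sketch, again over an invented toy ISA (nothing here is the real Voyager encoding), and note the catch: exhaustively enumerating the encoding space is only possible if you fully know it, which with incomplete docs they didn't.

```python
# Round-trip property check over a HYPOTHETICAL toy ISA.
import itertools

OPS = {"NOP": 0x00, "LDA": 0x01, "STA": 0x02, "JMP": 0x10}
INV = {v: k for k, v in OPS.items()}

def assemble(mnemonic, operand):
    return bytes([OPS[mnemonic], operand])

def disassemble(word):
    return INV[word[0]], word[1]

# Exhaustively verify assemble(disassemble(w)) == w over the whole
# (tiny) encoding space. Doable for 4 opcodes x 256 operands;
# meaningless if you don't actually know the full encoding space.
for op, arg in itertools.product(OPS.values(), range(256)):
    word = bytes([op, arg])
    assert assemble(*disassemble(word)) == word
print("round-trip OK")
```

A round trip passing doesn't prove either tool matches the real hardware, of course; it only proves they agree with each other. That's the gap certification would have to close.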
And if you can't mathematically prove it correct, you're better off doing it in your head?
No, but building the assembler and validating its output would take more time than just writing the patch by hand. And it's for a craft that isn't going to last more than 5 more years anyway.
Yes.
“Hello world” takes on new dimensions in this context.
void explore()
and serious latency
I think what fascinates me the most about all of this is how wide the gaps are in how much design and engineering documentation from that period has survived to the present day. For a long time, I just assumed that NASA owned and archived every design spec, revision, research paper, memo and napkin doodle related to its public-facing missions. I learned recently that even a lot of the original Gemini and Apollo program code (let alone source code) and docs are apparently gone forever.
Puts things into perspective. I often wonder how so many people survive without a UI debugger, since cmdline debugging seems so clunky.
Henry S. F. Cooper's book _The Evening Star_ is a great description of the Magellan probe (the Venus orbiter) and of how they debugged what turned out to be OS race conditions on a spacecraft millions of miles away.
Pff... and here I am, three days into debugging a stupid bug from 0.00001 miles away.