Not only is it apparently possible to code in machine code, but Apple II BASIC was written directly that way

No idea how on God's earth it's actually possible to hand-write an asm program on paper, then 'assemble' it by looking at memory addresses in hex and subsequently writing down the code completely in binary (or hex, I suppose, seeing as that would be easier), but apparently that's how Wozniak's ancient Integer BASIC was written.

Hell, I'd be very impressed with myself to get a hello world program going, let alone a parser for the BASIC language, written directly in machine code. Many times I've been told it's impossible to go lower level than asm, but this must be false -- rather, it must not be worth going lower, since it is definitely possible. I'm not arguing anyone would ever want or need to in today's world.

I guess it would have to be possible, seeing as hex editors edit binary files; simply inputting the right values to do what you're trying to would ultimately have the same results, but that's just crazy.

Even if you know all the details of the processor, it's insane.

This MUST have been far simpler, in order to be possible, on earlier computers with fewer bits. The Apple II was still quite complex, at the time at least. But I simply could not imagine it being anywhere near possible to recreate something like this on a modern Intel or AMD 64-bit (or even 32-bit) processor with so many instructions.

Still inspiring, though.
Any assembled program can be disassembled into the Assembly program that generated it (almost: if the assembler supports named constants, those are lost on disassembly), and every disassembled Assembly program can be assembled back into the machine code it was disassembled from. In other words, machine code and Assembly are representations of the same thing, so they're both considered to be at the same level of abstraction.
No lower level of abstraction exists for software.

I guess it would have to be possible, seeing as hex editors edit binary files; simply inputting the right values to do what you're trying to would ultimately have the same results, but that's just crazy.

Even if you know all the details of the processor, it's insane.

This MUST have been far simpler, in order to be possible, on earlier computers with fewer bits. The Apple II was still quite complex, at the time at least. But I simply could not imagine it being anywhere near possible to recreate something like this on a modern Intel or AMD 64-bit (or even 32-bit) processor with so many instructions.
I don't understand what you're talking about. If you can write Assembly and you want to write machine code by hand, all you need to do is write your Assembly program and assemble it by hand. To do this you just need the reference.
http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-instruction-set-reference-manual-325383.pdf
That's the entire reference for the x86 and x86-64 instruction sets.
Nobody does this because it's pointless when you have an assembler. Wozniak presumably didn't have one for his processor.
I had an assignment like that once: we were given sheets of paper full of zeros and ones and had to turn them into assembly, then try to figure out what the program did, with bonus points for writing an equivalent C program. We worked in groups, just to confirm we were all doing the same thing, because if you interpreted one bit incorrectly, the rest of it made no sense at all.

As helios said, it's just a matter of looking at the documentation for the instruction set.

Edit: I just looked at the link helios posted. Jumping Juniper Berries! It has 1515 pages: crikey :+) Ours wasn't anywhere near as bad as that, but it was only 16-bit DOS.
I suppose it would be harder to hand-assemble code for a modern x86 processor, simply because there are more instructions and encodings, but the idea is still the same. On a modern RISC processor, with its fixed-width instructions, it might even be easier.

This sort of thing isn't hard, it's just tedious.
not arguing anyone would ever want or need to in today's world.

You know where it's easy to run into machine code in today's world?
char shellcode[] =
"\xeb\x2a\x5e\x89\x76\x08\xc6........
@ Cubbi: I've seen that as a malware vector, but does anyone actually use that legitimately rather than using any of the asm() macros?
@Computergeek01 asm statement would indeed be how you make the first approximation of it (for each target architecture), but it may require further tooling by hand, so at least some ability to manipulate machine code is required.
Just going back to the Original question:

Back in the days of early space travel, I wonder what was used for all that computing? I wouldn't be surprised if assembly programming featured heavily in that.

Also, back in 1990, I met some people who worked exclusively with assembly and COBOL. They worked for a payroll/cheque-processing company, and talked about how they had assembly modules that were an inch thick when printed on paper.

So, yes there were people who worked with assembly for entire systems.
Back in the days of early space travel, I wonder what was used for all that computing? I wouldn't be surprised if assembly programming featured heavily in that.


The Apollo Guidance Computer is pretty interesting.
https://en.wikipedia.org/wiki/Apollo_Guidance_Computer
From the article:
The computer had 2048 words of erasable magnetic-core memory and 36 kilowords of read-only core rope memory. Both had cycle times of 11.72 microseconds. The memory word length was 16 bits: 15 bits of data and one odd-parity bit.


You read that right. We landed on the moon and returned with a computer that had 4K of RAM, 72K of ROM, and a clock speed of roughly 85 kHz. These days I bet a simple "Hello World" program in C++ is bigger. :)

BTW, there is an Apollo Guidance computer at the InfoAge museum in Wall, NJ. This is a really neat place with several museums running on a very tight budget.