We were doing the intro lesson to the internet simulator, and a student asked me why the computer even needed the decimal number, and why it couldn’t just take the binary and translate that directly into text. I tried explaining that the decimal level of abstraction is necessary because the computer doesn’t know whether you, the user, want the decimal number or the English character, and that a higher level of abstraction takes the decimal number and translates it into the character for you.
But he said he didn’t get why the computer couldn’t just go straight from binary to English, and I spent 10 minutes stumbling over an explanation before I said I’d get back to him.
Can anyone provide a more detailed explanation, or resource, to help explain this? Thanks!
Hi @linda.henneberg,
I love when students come up with questions that challenge us and make us scratch our heads. 
In short, I don’t know the answer to that question haha. I did a quick Google and here’s the closest I could find: https://stackoverflow.com/questions/6826516/how-exactly-does-binary-code-get-converted-into-letters
My interpretation is that a computer doesn’t necessarily convert to decimal in between, and ultimately it depends on the program/protocol being used. I think the decimal go-between is a concept that helps us as humans make sense of it, but not necessarily what’s actually happening. It seems like the computer takes the given codepoint mapping and goes directly from binary (or hex, in that linked text file?) to symbol.
So my guess is if there’s any go-between, it wouldn’t be for technical reasons, but for human-understandability reasons.
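To show what I’m picturing, here’s a rough Python sketch (the table and function names are just made up for illustration, not what any real protocol literally does). The lookup goes straight from the bit pattern to the symbol, with no decimal step anywhere:

```python
# A tiny made-up slice of an ASCII-style codepoint table, written as a
# direct mapping from 8-bit patterns to symbols -- no decimal in sight.
CODEPOINTS = {
    "01101000": "h",
    "01101001": "i",
    "01101110": "n",
}

def bits_to_text(bitstring):
    """Split a string of bits into 8-bit chunks and look each one up directly."""
    chunks = [bitstring[i:i + 8] for i in range(0, len(bitstring), 8)]
    return "".join(CODEPOINTS[chunk] for chunk in chunks)

print(bits_to_text("0110100001101001"))  # -> "hi"
```

The decimal version of each byte never has to show up anywhere in that process.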
Definitely, if there’s anyone who has an actual understanding of this, please chime in!
I also don’t have an answer, but the way it makes sense in my head is to tie it back to “metadata”: the binary is always translated as a number value, but then “we” have to decide what those numbers mean. Are they colors/pixels? Sound information? Letters/text?
So, I don’t know that it is necessarily turned into decimal by the computer, but if we are trying to interpret it, we would want to acknowledge that intermediary step: that binary IS a number, but we are simply reading it a specific way.
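To make that concrete, here’s a tiny Python sketch (my own made-up example, assuming ASCII text and pretending the bytes could also be pixel brightness values) of the same raw bits being read three different ways depending on what we decide they mean:

```python
# The same raw bytes, read three different ways depending on what the
# "metadata" says they are. Nothing about the bits themselves changes.
raw = bytes([0b01101110, 0b01101111])    # two bytes: 01101110 01101111

as_text   = raw.decode("ascii")          # read them as ASCII letters
as_values = list(raw)                    # read them as plain numbers (e.g. pixel brightness)
as_number = int.from_bytes(raw, "big")   # read both bytes together as one big number

print(as_text)    # "no"
print(as_values)  # [110, 111]
print(as_number)  # 28271
```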
I think they are right in that a computer never translates Binary -> Decimal -> ASCII. When a computer sees a byte of data like 01101110 and is told to output it as an ASCII character, it simply maps 01101110 -> n. It’s only us humans who can’t look at 01101110 and see the n, so we want to know that 01101110 is 110 in decimal (something we understand) and that in the ASCII table 110 is an n.
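You can see that idea in Python (just an illustration, not what the hardware literally does):

```python
byte = 0b01101110                        # the raw bit pattern

# The "translation" is a single table lookup straight to the symbol:
print(bytes([byte]).decode("ascii"))     # n

# The decimal 110 only appears when a human asks for it:
print(byte)       # 110  (Python shows the value in decimal for our benefit)
print(ord("n"))   # 110  (the same table entry, approached from the other side)
```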