EvilZone
Community => General discussion => : D4rKn355 November 09, 2012, 07:36:53 AM
-
I'm curious how letters are changed to binary. If they use the Unicode number representing these letters, then why aren't they mixed up with numbers? The binary value of a number can be the same as the binary value of a letter, right? Could you explain it to me in depth?
Edit: I mean, why aren't the numbers mixed up or confused with the binary of the letters? The computer stores both of them as 0s and 1s. How can the computer tell them apart?
-
Hi,
I am not sure if I understood your question correctly; you are asking how to get the binary value of a letter? Well... first you need to look up the ASCII value of the specific character.
(http://www.asciitable.com/index/asciifull.gif)
For example, the capital A has a decimal value of 65, which is 0x41 in hex. You can easily represent that decimal value in binary. As you may have noticed, there is no decimal value higher than 255, which means you won't need more than one byte (= 8 bits). The capital A, for example, looks like this: 01000001. How do you calculate that? It's pretty easy! Each bit represents a decimal value, counting from the rightmost bit: first: 1, second: 2, third: 4, fourth: 8, fifth: 16, sixth: 32, seventh: 64, eighth: 128. If you sum all of these together you end up with 255 (0 -> 255 = 256 possibilities). For the capital A, 1 + 64 = 65, and the decimal value 65 is the ASCII code for A. Hope this helps.
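If you have Python handy, here is a small sketch of the same idea (just an illustration, the variable names are my own):

# Minimal sketch: character -> ASCII code -> 8-bit binary string, and back.
ch = 'A'
code = ord(ch)                 # 65, the ASCII/Unicode code point of 'A'
bits = format(code, '08b')     # '01000001', padded to one byte (8 bits)
print(ch, code, hex(code), bits)

# Summing the place values of the set bits gives the code point back;
# bit positions counted from the right are worth 1, 2, 4, 8, 16, 32, 64, 128.
total = sum(2**i for i, b in enumerate(reversed(bits)) if b == '1')
print(total)                   # 65 again (64 + 1)
assert total == code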
Cheers,
RBA
-
I'm curious how letters are changed to binary. If they use the Unicode number representing these letters, then why aren't they mixed up with numbers? The binary value of a number can be the same as the binary value of a letter, right? Could you explain it to me in depth?
Yes, but they are not confused with numbers because of how they are stored.
Technically, everything in computer data is 0s and 1s.
But we are still able to differentiate numbers, instruction codes, addresses, etc. It's all about encoding.
Read about multiplexers and you'll get it.
-
Sorry that my question is kind of confusing, I'm not a native English speaker. Anyway, back to the topic: @p_2001, that was exactly what I was asking about. Can you go into more depth on it?
And @RedBullAddicted, it really helped me, thanks.
-
Sorry that my question is kind of confusing, I'm not a native English speaker. Anyway, back to the topic: @p_2001, that was exactly what I was asking about. Can you go into more depth on it?
And @RedBullAddicted, it really helped me, thanks.
Hmm... look, everything in a computer comes down to 0s and 1s. It all boils down to machine code.
It's all encoded in 0s and 1s.
Now, I don't remember the actual encodings, so take this as a hypothetical example.
Say all addressing instruction codes start with 0. Then, if it is direct addressing, the second bit will again be 0, and if it is indirect, it will be 1. The individual bits are used to determine exactly what hardware to employ, like the adder, the logical OR, etc.
Similarly, at a higher level, encoding is again used to identify what is what.
For example, 000 before a value could mean it is a string,
and 001 could mean it is an integer.
So the bits preceding it decide what the data is treated as.
For the actual encodings, look up some books.
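Just to make that made-up tag idea concrete, here is a toy sketch in Python (the 000/001 tags are purely hypothetical, not any real machine encoding):

# Toy illustration of tag bits: a few prefix bits say how to interpret
# the payload bits that follow. The tags here are hypothetical.
TAG_STRING  = '000'   # payload is an ASCII character code
TAG_INTEGER = '001'   # payload is a plain integer

def decode(word):
    tag, payload = word[:3], word[3:]
    value = int(payload, 2)
    if tag == TAG_STRING:
        return chr(value)      # treat the bits as a character
    if tag == TAG_INTEGER:
        return value           # treat the very same bits as a number
    raise ValueError('unknown tag')

print(decode('000' + '01000001'))   # 'A'  - bits 01000001 read as a character
print(decode('001' + '01000001'))   # 65   - identical bits read as an integer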
-
Thanks bro, I get it now.
-
Shameless plug: http://evilzone.org/tutorials/hex-and-binary-becoming-a-better-hacker/
That might help a bit. Cheers
-
The capital A, for example, looks like this: 01000001 ... Each bit represents a decimal value, counting from the rightmost bit: first: 1, second: 2, third: 4, fourth: 8, fifth: 16, sixth: 32, seventh: 64, eighth: 128.
For the capital A, 1 + 64 = 65, and the decimal value 65 is the ASCII code for A.
Sorry, but isn't 01000001 = 2 + 128 = 130???
65 (A) should be 10000010, no????
-
Hi relax,
1 byte (8 bits) can represent any number from 0 to 255:
1   1   1   1   1   1   1   1
128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
10000010 = 128 + 2 = 130
01000001 = 64 + 1 = 65
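If you want to double-check those sums quickly, Python reads a bit string the same way, most significant bit on the left (just a sanity check, my own example):

print(int('10000010', 2))   # 130  (128 + 2)
print(int('01000001', 2))   # 65   (64 + 1)
print(format(65, '08b'))    # '01000001' - and back again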
Cheers,
RBA
-
Hi relax,
1 byte (8 bits) can represent any number from 0 to 255:
1   1   1   1   1   1   1   1
128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
10000010 = 128 + 2 = 130
01000001 = 64 + 1 = 65
Cheers,
RBA
Hmm, weird, I thought it was counted increasing: 1, 2, 4, 8, 16, 32, 64, 128,
instead of decreasing: 128, 64, 32, 16, 8, 4, 2, 1.
Thanks for clearing that up :P
-
Hmm, weird, I thought it was counted increasing: 1, 2, 4, 8, 16, 32, 64, 128,
instead of decreasing: 128, 64, 32, 16, 8, 4, 2, 1.
Thanks for clearing that up :P
Read Bluechill's description. I messed up >.<
-
It depends on the computer architecture. x86 is big endian, which means the high-order bits are on the left and the low-order bits on the right. Little endian is the reverse and is used by SPARC computers I believe, not 100% sure though.
x86 and x86_64 are little endian. IA64 (Intel's failed 64-bit Itanium architecture) and ARM can be either big endian or little endian, as you can switch it during program execution. Big endian means the most significant byte comes first, whereas little endian puts the least significant byte first. Big endian is like how we write 1234 in decimal; little endian is the reverse, so it would be 4321.
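If you want to see the byte order directly, here is a small Python sketch (my own example, nothing architecture-specific):

# The same 32-bit value laid out in both byte orders.
value = 0x12345678

big    = value.to_bytes(4, byteorder='big')     # b'\x12\x34\x56\x78' - most significant byte first
little = value.to_bytes(4, byteorder='little')  # b'\x78\x56\x34\x12' - least significant byte first

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Either layout decodes back to the same number if you read it the same way:
assert int.from_bytes(little, byteorder='little') == value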
-
Holy shit, I got it backwards. My mistake, guys, I'll edit my post. Wow... that was so fail. Thanks for the correction, Bluechill.