Registered Member:
I was dismayed to see this code in libktorrent/util/constants.h:
This has a rather high probability of being wrong on a 64-bit system, with possible bad performance implications for SHA1HashGen::processChunk, which callgrind profiling says ktorrent spends most of its time in. (For example, there are a lot of bitwise ANDs with 0xFFFFFFFF in that code, and these make absolutely no sense if Uint32 is actually 32 bits -- ANDing a genuine 32-bit value with 0xFFFFFFFF is a no-op.) Can't we use inttypes.h or stdint.h or some Qt typedefs for this?
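A sketch of what the stdint.h suggestion could look like -- the typedef names follow libktorrent's existing Uint32 convention, but the exact set of typedefs in constants.h is assumed here, and the static_assert checks require a C++11 compiler (older code would use a compile-time array trick instead):

```cpp
#include <cstdint>

// Hypothetical replacement for the typedefs in
// libktorrent/util/constants.h, built on the exact-width
// types from <cstdint> instead of raw int/long:
typedef uint8_t  Uint8;
typedef uint16_t Uint16;
typedef uint32_t Uint32;
typedef uint64_t Uint64;

// Refuse to compile on any platform where the widths are wrong,
// instead of silently miscomputing at runtime:
static_assert(sizeof(Uint32) == 4, "Uint32 must be exactly 32 bits");
static_assert(sizeof(Uint64) == 8, "Uint64 must be exactly 64 bits");
```

With exact-width types, the defensive `& 0xFFFFFFFF` masks in processChunk become genuine no-ops that the compiler can drop.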
Moderator:
I thought int was always 32 bits, and that long depended on the architecture. I guess we can replace them with something else.

EDIT: I used to work in 64-bit mode (I have an amd64 3500), and a long time ago we were using unsigned long, until somebody pointed out that the size was wrong. So I changed it to use int, and when I tested it then, it was 32 bits.
Registered Member:
No -- you have what is sometimes known as the "all the world's an x86" view. See http://www.parashift.com/c++-faq-lite/n ... l#faq-29.5 for a quick reference, and see section 3.9.1 of the C++98 standard for the guarantees that *are* given. In summary: sizeof(signed char) <= sizeof(short int) <= sizeof(int) <= sizeof(long); a char occupies exactly one byte, where a byte must be "big enough to represent the machine's native character set" (whatever THAT means); and sizeof(char) == 1, because sizeof reports sizes in multiples of sizeof(char). int has "the natural size suggested by the execution environment", which may very well be 6 bytes (I have personally seen this to be the case on an Opteron running RHEL 4), 8 bytes, 2 bytes, or of course 4 for a reasonable implementation on x86. I can find it in my copy of Stroustrup as well when I get home, if C++ FAQ Lite is not convincing and you don't have access to a copy of the standard. All the sizes depend on whatever the compiler implementer feels like, subject to those guarantees.

After looking at the assembly generated for SHA1HashGen::processChunk, it seems gcc can figure out from the ANDs with 0xFFFFFFFF that many of those quantities ought to be 32-bit, which is probably nice. But the SHA-1 algorithm doesn't actually care whether the intermediate values are larger than 32 bits (i.e. represented modulo some multiple of 2^32), as long as they are reduced mod 2^32 by the end of the algorithm -- I'll leave further elaboration on SHA-1 to another thread.