A Messy Transition: Practical Problems With 32bit Addressing In Windows
by Ryan Smith on July 12, 2007 12:00 PM EST - Posted in Software
As it turns out, it's possible and actually quite easy to move the 2GB barrier by increasing the size of the user space, but at the cost of reducing the size of the kernel space. Under Windows XP, this is the fabled "/3gb" switch for boot.ini, and for Windows Vista it's the "IncreaseUserVa" option in BCDedit. By using these options, applications can use more than 2GB of virtual address space (generally up to 3GB), and ideally this would be the end of the article.
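For reference, here's roughly what the two settings look like. The boot.ini line is only an illustration - the disk/partition path and the other switches will vary from system to system - and on Vista the value passed to BCDedit is the size of the user space in MB, so anything between 2048 and 3072 is valid:

    Windows XP: append /3GB to the OS entry in boot.ini
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB

    Windows Vista: from an elevated command prompt
    bcdedit /set IncreaseUserVa 3072
    rem to revert to the default 2GB/2GB split:
    bcdedit /deletevalue IncreaseUserVa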
Unfortunately this is not the case, as there are problems on both the application and kernel side of things. On the application side, a common poor programming practice has been to assume that an application will only ever be dealing with 2GB of user space; code that makes this assumption will likely misbehave if more than 2GB of user space is actually available. This is avoidable by following proper programming practices, but as a safety precaution, Windows still defaults to limiting an application to 2GB even when additional virtual address space has been allocated to user space. Only if an application indicates to Windows that it is capable of handling more than 2GB, via the "/LARGEADDRESSAWARE" flag, may it have access to any space above 2GB.
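As a quick illustration of that behavior, the following is a minimal sketch (not anything from our test suite) of how a 32bit program can ask Windows how much user space it was actually given. It assumes a Visual C++ build, where the flag is set at link time with /LARGEADDRESSAWARE or stamped onto an existing executable with editbin /LARGEADDRESSAWARE:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);

        if (GlobalMemoryStatusEx(&status))
        {
            /* ullTotalVirtual is the user-mode virtual address space available
               to this process: roughly 2GB for a normal 32bit process, and
               roughly 3GB only when the EXE is large address aware AND Windows
               has been booted with /3GB (or IncreaseUserVa). */
            printf("User virtual address space: %I64u MB\n",
                   status.ullTotalVirtual / (1024 * 1024));
        }

        return 0;
    }

Without the flag, the reported value stays at roughly 2GB no matter how Windows was booted, which is exactly the default safety behavior described above.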
As for the kernel, having had up to half of its space taken away, it must now find a way to live in what remains. The (in)ability of any specific system/Windows configuration to deal with this is why the /3gb switch is considered dangerous, seldom recommended, and just generally a bad idea. The biggest culprits here are drivers that run in kernel space. Like applications, they may assume that there's an entire 2GB of address space to work with, except that unlike applications, their space gets smaller instead of bigger.
Windows' own memory needs can also cause problems with the reduced kernel space. As we mentioned before, the kernel needs space for a multitude of things; if a lot of space is required - video cards with a lot of memory are a particular offender here - then everything needing space may not fit in the smaller kernel space. Because there are no strong safeguards against these conditions, the result can be a failure to boot or system instability, the latter especially if the culprit is a driver that is well enough behaved to at least let the system boot. Many modern drivers from vendors of enterprise-level hardware are capable of handling this; many more consumer hardware drivers are not. These stability concerns are the number one reason that breaking the 2GB barrier on a 32bit version of Windows is not recommended.
There is also a second concern, however: performance. While an individual application may benefit from more user space in which to work, the kernel now has less space in which to cache data (as non-obvious as this may seem given that all of these addresses are virtual), and this can in theory hurt performance. This is something we will take a look at in a bit.
69 Comments
BabyBear - Thursday, July 12, 2007 - link
This also happens with Flight Simulator X by the way.
Over on Phil Taylor's blog he makes mention of it a while back:
http://blogs.msdn.com/ptaylor/archive/2007/06/15/f... (Microsoft's Phil Taylor's blog)
There was also some talk that the June 07 DirectX Redist. seemed to 'help' with Out of Memory problems when using the /3gb switch.
joetron2030 - Thursday, July 12, 2007 - link
On "Page 4: A Case Study: Supreme Commander", in the paragraph discussing the first screen shot from Sysinternals' Process Explorer, you refer to the "Virtual Size" column as the "Private Size" column (which doesn't exist in the screen shot).Had me briefly confused.
quanta - Thursday, July 12, 2007 - link
Although Windows 2000/2003/Vista server versions aren't exactly designed for gaming, did anyone test game titles on these server OSes to see if these large address aware titles will use the Physical Address Extension feature?
Ryan Smith - Thursday, July 12, 2007 - link
The short answer is no. The longer answer is that due to a combination of chipset support, software support, performance, and driver support, it's not really usable outside of a server environment and shouldn't be used on a consumer system.
brink - Thursday, July 12, 2007 - link
Wouldn't matter, WinXP SP2 uses PAE, so why would you want to install it on a server OS? Only 2003 is a server OS of the ones you mentioned, and I think Windows 2003 Enterprise is the only 32-bit OS that has the ability to use more than 4GB of memory (8GB seems to be the limit for what modules/mobos are available right now).
TA152H - Thursday, July 12, 2007 - link
I'm just reading the remarks about the 386 pushing the memory from 1 MB to 4 GB. It's patently untrue, unless you think the 386 succeeded the 8086, and not the 286. The 286 had 24-bit addressing (in 64K segments) for 16 MB of memory. This is not only what OS/2 used, but extended memory as well. So, it's not academic.
Ryan Smith - Thursday, July 12, 2007 - link
Just hearing the words "segmented memory" gives me flashbacks - and they're not the good kind. For this article we're talking strictly about flat addressing, since I could write a small book on just segmented addressing, but I've updated the article to make this clear.
TA152H - Thursday, July 12, 2007 - link
Either way, you're using the wrong values. Even on the 8086 the memory was segmented. Actually, so was the 386, but it could handle segments up to 4 GB, so it wasn't important. So, using 1 MB and saying you're not referring to segmentation is inaccurate; that is segmented memory. Segmentation was not a bad thing, particularly back then. It saved memory space, and back in 1978 that was a big thing. Addresses were given in 16 bits, you didn't have to specify the rest, and the net result was you had shorter code. You would have to change the registers to point to the next segment from time to time, but overall it saved a lot of memory. You also could protect apps from each other, but that wasn't really used.
I was an OS/2 developer, and it was a pain sometimes for sure. By the time the 286 came out - which I think was the best processor Intel ever made, considering the timing and the incredible capabilities it had over its predecessor (the 8086, not the 80186, which was made in parallel with the 286) - the extra memory savings were not worth the nuisance of having to deal with, at most, 64K segments. But it really wasn't possible for Intel to do it any other way, as I'll explain below.
Motorola even added some external memory management unit that added segmentation, which today seems strange. It's widely viewed now as just a bad thing, but back then, it wasn't. Motorola really had a choice though, since the 68K was part 32-bit, and part 16-bit, and part 24-bit (addressing). Although the 286 was a more powerful processor than the 68000, it was pure 16-bit, except for the oddity of the 24-bit addressing. Consequently, allowing flat 24-bit addressing would not have been feasible. So, they dealt with it by just adding more segments. Considering the absolutely incredible improvement in this processor in just about every way, it was not such a bad tradeoff.
BitJunkie - Friday, July 13, 2007 - link
Hiya Bill.
jay401 - Thursday, July 12, 2007 - link
Just FYI, SupCom has already exhibited this problem and crashes once it breaks the 2GB barrier, which can happen easily in longer games with a high unit limit. This is mostly due to the poor efficiency in their code and models, something GPG (the developer, Gas Powered Games) reported could not easily be fixed in a patch because they basically have to re-render every unit in the game so they take up less memory. Due to this, it's likely to remain unfixed until the expansion pack (now a standalone game) Forged Alliance comes out in November, if it is fixed at all.
One forum member developed a way to increase the addressable memory to around 3GB on 32bit WinXP and Vista, so if you're running 4GB, this provides a sort of band-aid solution in the meantime.