17

I know there have been computer games that relied on a fixed CPU frequency. Instead of using a clock function, they relied on the fact that the CPU needs a certain amount of time to execute the code.

But why did they do it? Aren't there drawbacks only?

For example, suppose the game is made to run on a 300 MHz CPU. If the CPU runs at 250 MHz or 350 MHz, the game will run about 17% slower or faster. Background tasks running on the PC may also be a problem (making the game run slower).

So why did anyone want to do this instead of using a clock function (which is much more accurate)? I'm sure there is an answer to my question.

10
  • 35
    By the time of 300 MHz CPUs they did not do this any more. They only did this when all PCs of the same rough type ran at the same frequency. Commented May 4, 2023 at 20:16
  • 20
    Home computers were as cheap as possible. Many had no independent hardware clock at all. Others had hardware clocks without sufficient precision to tell fine-grained time independent of the CPU frequency. Others still had hardware clocks that were really counters or dividers slaved to the CPU frequency, so not independent. Commented May 4, 2023 at 20:26
  • 17
    @Justme And the "Turbo" button on some 8 MHz clones wasn't so much to make the system run fast (you would almost always want as fast as possible, except for certain games) but to slow the system to 4.77 MHz for games that needed it. Commented May 4, 2023 at 21:06
  • 3
    @manassehkatz-Moving2Codidact or the unTurbo button, as it was sometimes known, for the reason you give
    – Chris H
    Commented May 5, 2023 at 9:21
  • 5
    Games were (like today) oftentimes a product to be consumed the year they came out. As a developer in the 80s (and sadly also today), try to convince your boss that you need to work on a highly technical feature that may allow people to run the game in 5 years, too. Why? By then Part 2 will be out if it is successful, so why still play Part 1? Commented May 5, 2023 at 14:51

8 Answers

58

Every game I've seen that assumed a fixed CPU frequency in some sense came from one of two eras:

First, games for the original IBM PC and its clones (the machines Turbo buttons were invented to preserve compatibility with) were written in an era when CPU speed was effectively part of the platform's ABI. As with earlier platforms like the Commodore 64, and later consoles like the NES, SNES, Sega Genesis/Mega Drive, etc., the IBM PC was simply assumed to be fixed-spec hardware.

Second, games like the original Unreal and Unreal Tournament (which will run on varying hardware, but require you to lock your modern CPU to a fixed clock speed before launching them) were written in an era when x86 CPUs that could vary their clock speed dynamically during execution didn't exist. Measuring the CPU's speed at startup and assuming it would stay constant, to avoid the overhead of making system calls to check a timer, was perfectly reasonable, to the point where not doing it would have been just one of a wide swath of "let's hedge against someone inventing a Star Trek tricorder or replicator in two years and using it to cheat" shots in the dark.

In both cases, the answer boils down to "Because that was a standard assumption at the time, based on experience with the platforms of the time, and which only seems flawed with the benefit of hindsight".
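
For illustration, a minimal sketch of that second approach (calibrate once at startup, then trust the timestamp counter) might look like the C below. This is not Unreal's actual code; it assumes an x86 GCC/Clang toolchain providing __rdtsc() and a POSIX clock_gettime():

```c
/* Sketch of startup calibration: measure timestamp-counter ticks per second
 * once, then derive elapsed time from rdtsc deltas.  This breaks if the CPU
 * later changes frequency, which is exactly the failure mode described above.
 * Assumes x86 GCC/Clang (__rdtsc) and POSIX timers. */
#define _POSIX_C_SOURCE 200809L
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

static double ticks_per_second;

static void calibrate(void)
{
    struct timespec t0, t1, nap = { 0, 100 * 1000 * 1000 };  /* ~100 ms window */
    uint64_t c0 = __rdtsc();
    clock_gettime(CLOCK_MONOTONIC, &t0);
    nanosleep(&nap, NULL);                   /* let some real time pass */
    uint64_t c1 = __rdtsc();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    ticks_per_second = (c1 - c0) / dt;       /* assumed constant from now on */
}

int main(void)
{
    calibrate();
    uint64_t start = __rdtsc();
    /* ... a game frame would run here ... */
    double elapsed = (__rdtsc() - start) / ticks_per_second;
    printf("frame took %.6f s (valid only if the clock never changed)\n", elapsed);
    return 0;
}
```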

12
  • 24
    This, plus the very relevant cost of potential clock calls, both in programming complexity and in performance. IMHO, even in hindsight it was the right call, because in a lot of cases you just didn't have the spare computing power available
    – Hobbamok
    Commented May 5, 2023 at 8:46
  • 1
    I wonder if anyone has come up with a patcher for programs written in Turbo Pascal to modify the Delay function so it's a bit more accurate on modern PCs?
    – supercat
    Commented May 5, 2023 at 18:38
  • 10
    @UpAndAdam This doesn't say that UT/Unreal was locked to a specific frequency, e.g. 300 MHz on every computer, only that it required the CPU to be at a static frequency during the runtime of the program; if you start at 783 MHz you should continue at 783 MHz the entire time the game is running. It knows that the frequency can be different between different computers, but it assumes that the frequency will not change while the application is actually running. Commented May 5, 2023 at 19:50
  • 1
    Fair point. I misread that detail and missed the nuance. I hadn't even thought about the issue of games running at non-constant frequencies nowadays. You have my upvote, fine sir
    – UpAndAdam
    Commented May 5, 2023 at 20:37
  • 2
    @supercat yes, 2008 some time I think. "search turbo pascal divide by zero bug"
    – Jasen
    Commented May 7, 2023 at 11:09
34

A typical clock function in the 1980s was not very high resolution. MS-DOS function 2CH apparently reported 100ths of a second, but it was actually based on the 18.2 Hz timer interrupt, so it wasn't really that high resolution. There was also overhead in making a system call to find out the time, and every cycle counts at 4.77 MHz.

Typical systems used interrupts instead. And those were typically:

  • MS-DOS - 18.2 ticks per second. See this post for some of the details and this one for some more.
  • 60 (US) or 50 (Europe) ticks per second. This was used in a lot of systems, including MP/M-86 (of which I have first-hand knowledge, and double-checked) and, I believe, plenty of other variants of the Digital Research operating systems, as well as other systems. This rate matches the local mains electricity AC frequency. Even though the computers all ran on DC power, this was a very common timing source to use.

It was only years later, as CPU speeds increased, that either higher frequency interrupts or a system call to read a higher-resolution time value became common.
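
To make the coarseness concrete, here is a rough sketch of reading the BIOS tick counter behind that 18.2 Hz interrupt, written in the style of a 16-bit real-mode DOS compiler such as Turbo C (MK_FP comes from its dos.h). It is illustrative only, not taken from any particular game:

```c
/* Illustrative sketch: read the BIOS timer-tick count at 0040:006C, which
 * the 18.2 Hz interrupt increments.  Each tick is about 55 ms, so this is
 * the best resolution available without reprogramming the timer chip.
 * Assumes a 16-bit real-mode DOS compiler (e.g. Turbo C) providing MK_FP. */
#include <dos.h>

void wait_ticks(unsigned long n)   /* crude delay in ~55 ms units */
{
    unsigned long far *bios_ticks = (unsigned long far *) MK_FP(0x0040, 0x006C);
    unsigned long start = *bios_ticks;
    while (*bios_ticks - start < n)
        ;                          /* spin; a real game would do work here */
}
```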

8
  • 4
    The DOS "get time" function just uses the time tick counter. It provides a centisecond value, but this value increases in increments of 5 or 6 (due to the 55ms resolution). Commented May 4, 2023 at 20:49
  • 1
    I understood it that way. My comment was meant to remove the doubt in that sentence, and make "it is based on the 18.2 timer interrupt" a confirmed fact. Commented May 4, 2023 at 21:11
  • 3
    Syncing to the vertical blanking interval (usually 50 or 60 Hz) seems reasonable for simulation step intervals, especially for systems with fairly fixed hardware. So the low precision isn't really a problem. I think it's more likely just about extra syncing costing speed. Commented May 5, 2023 at 5:49
  • 6
    I would like to add that games weren't expected to sell well for more than two or three months, so they'd use whatever pragmatic algorithms were easiest at the time. Commented May 5, 2023 at 6:37
  • 2
    DOS games could reprogram the timer interrupt for whatever value they felt like. They were not limited to the slowest possible value of 18.2 Hz.
    – Justme
    Commented May 6, 2023 at 21:19
24

The answers here are quite PC-centric, which is the wrong approach, as x86 PCs came really late to the computer game industry. Before that, games were written and ported to specific machines, machines that did not evolve in performance. A C64 or an Apple II ran at 1 MHz from the first unit built to the last. A Spectrum ran at 3.54 MHz. There was no point in synchronizing to an external clock, especially since most of these computers didn't even have a timer.

The code was guaranteed to run at a specific speed provided by the platform. The platform did not change. The same phenomenon existed with consoles, which had fixed hardware during their lifetime, even if higher integration changed the implementation (the PlayStation 2 exists in tens of different builds, but all have the same performance).

4
  • 5
    On the stock Apple II there's actually no way to tell time beyond cycle counting. No timers, no interrupt sources whatsoever.
    – Tommy
    Commented May 5, 2023 at 13:19
  • 2
    @Tommy: That isn't quite true. On a stock Apple II in hires mode, if one avoids using some particular byte value anywhere on the screen, one can store that byte value into the eight bytes of one of the screen holes. Reading a floating-bus address will yield that byte value only when the raster is scanning that part of the screen.
    – supercat
    Commented May 5, 2023 at 17:09
  • Some hardware like C64, NES or Amiga came in different versions for different TV systems. So you could not expect a C64 to run programs intended for the other TV system.
    – Justme
    Commented May 5, 2023 at 17:45
  • 1
    This. PCs were not bought for gaming at that time (yet), but for business use, and the faster the one you could buy, the better. Games were just casual bystanders for this. Commented May 5, 2023 at 19:56
5

But why did they do it? Aren't there drawbacks only?

They did it because the systems were slow and they needed all the speed they could get. They could get away with it because every compatible system ran at the same speed. When that was no longer so this technique was dropped.

The drawbacks became apparent when you had a system with variable speed. Also some games ran at different speeds depending on what was happening, which was often disconcerting and could make them harder to play.

Another problem that affected some systems was hardware malfunctions when a peripheral was accessed without necessary delays. This could cause eg. screen corruption, garbled sound, or copy protection failure.

Games that needed smooth animation were generally synced to the frame rate where possible. However, doing this can cause a massive slowdown if the rendering takes too long and a frame is skipped. Alternatively, a game might take fewer frames to render on a fast machine; if the code just waits for the end of the current frame, it will run faster than it should.
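
As a sketch of the trade-off described above: a frame-synced loop stays smooth, but it has to count how many frame periods actually passed, otherwise a fast machine that finishes rendering early runs the game faster than intended. The function names here (wait_for_vblank, frames_elapsed, update, render) are hypothetical placeholders, not any real API:

```c
/* Sketch of a frame-synced game loop.  Counting how many frame periods
 * elapsed (instead of assuming exactly one) keeps a fast machine from
 * running the game faster than intended and lets a slow machine catch up.
 * All functions below are hypothetical placeholders. */
extern void wait_for_vblank(void);      /* blocks until the next 50/60 Hz frame */
extern unsigned frames_elapsed(void);   /* frame periods since the last call    */
extern void update(void);               /* advances the game state by one tick  */
extern void render(void);

void game_loop(void)
{
    for (;;) {
        wait_for_vblank();
        unsigned n = frames_elapsed();  /* 1 normally, 2+ if a frame was missed */
        while (n--)
            update();                   /* fixed-size steps, repeated as needed */
        render();
    }
}
```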

For some games it didn't matter though, eg. turn based simulations and board games. For example in Microsoft Solitaire the ending animation runs as fast as possible, but that doesn't affect game play.

1
  • One flight simulator that I got when 286 machines were pretty new seemed to work that way, just run as fast as possible, and it was a fun game. I ran it a few years later on a 486 machine, and it was utterly unplayable running so fast. In a couple of seconds, you'd taken off, climbed, and crashed because you hadn't reacted properly when the jet started to roll... bummer!
    – Ralph J
    Commented May 8, 2023 at 1:02
3

(I'm a bit too old to judge whether even the first PC games should be considered retro or not, but...)

My guess would be that those who developed the first PC games may have had some background on older platforms and therefore (or otherwise) just wanted to get the most out of the hardware, at a time when you could assume a certain CPU had a certain clock frequency.

(An earlier reply commented on the responses being PC-centric, but I guess that is what the originator of this thread wanted.)

3

For example, suppose the game is made to run on a 300 MHz CPU. If the CPU runs at 250 MHz or 350 MHz, the game will run about 17% slower or faster. Background tasks running on the PC may also be a problem (making the game run slower).

On the other hand, when a game was designed to run on a 4.77 MHz 8088, they wouldn't have that problem, because that was the only kind of PC available at the time.

So why did anyone want to do this instead of using a clock function (which is much more accurate)?

I guess their crystal ball was broken.

I don't think anyone was expecting 40 years of backwards compatible hardware upgrades. Such a thing had never happened before.

1

One major consideration in game design is to make games deterministic, so that one can rely on certain outcomes for certain inputs. Controlling a game by an external timer instead of synchronising it to the game mechanics may result in "playthroughs" not working dependably.
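
A minimal sketch of that idea: if the simulation advances in fixed ticks and consumes inputs tagged with tick numbers, a recorded playthrough replays identically regardless of wall-clock timing. All names here (recorded_input, sim_step) are invented for illustration, not taken from any engine:

```c
/* Sketch of deterministic replay: inputs are logged per tick number, so
 * replaying the same input log reproduces the same game, independent of
 * how fast the machine happens to run.  Names are hypothetical. */
#include <stddef.h>

struct recorded_input { unsigned tick; int buttons; };

extern void sim_step(int buttons);       /* advances exactly one game tick */

void replay(const struct recorded_input *log, size_t n, unsigned total_ticks)
{
    size_t i = 0;
    for (unsigned tick = 0; tick < total_ticks; tick++) {
        int buttons = 0;
        while (i < n && log[i].tick == tick)   /* inputs logged for this tick */
            buttons |= log[i++].buttons;
        sim_step(buttons);
    }
}
```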

2
  • 1
    I disagree, especially on the PC where (pre-USB) reading the joystick took a variable amount of time depending on joystick position.
    – Jasen
    Commented May 7, 2023 at 10:57
  • 1
    @Jasen That depends on how you read it. You don't have to sit in a loop doing nothing until the variable timing is complete. Or if you do, you can always sit in the timing loop for a fixed amount of time, regardless of joystick position. It should only take a millisecond or so anyway, but on the other hand, 1 ms is 6% of the 60 Hz frame period.
    – Justme
    Commented May 7, 2023 at 12:42
1

Reason 1: It was the norm at the time

In the early to mid-80s it was the norm that a given platform, whether console or PC, was relatively static. The CPU ran at the same speed on all devices, and the video system and memory were often the same across machines of the same platform, barring some regional changes (e.g. PAL vs. NTSC).

What would also usually happen is that a game would get completely ported to a new system or platform, sometimes resulting in a vastly different game between, e.g., a C64 and a NES.

While different platforms could have the same CPU (e.g. a Z80), the underlying architecture of the rest of the system could be vastly different, especially the video output system. In many cases it was simply much easier to rewrite large parts of a game to adapt it to the new platform, and while you were there, you adjusted the game loop timings accordingly.

Reason 2: Hardware

Most computers and consoles at the time did not contain any reliable timer other than the CPU itself, meaning that the programmer often simply did not have any other option.

At the same time, home computers and consoles were often rather simple, single-tasking setups, and resources were often very constrained.

It wasn't really until the PC platform adopted the 286 processor that a single hardware platform could span a multitude of different CPUs.

Yes, you could do rudimentary hardware detection to adjust the game speed and timings, but implementing such a system takes time and resources, and it wouldn't always be reliable. Often, the PC would have the (now classic) Turbo button as well.

The 286 (as well as later x86s) posed some additional problems, as some instructions in the CPU itself were optimized, which could throw off the timing of game code all on its own. An old instruction that took 8 cycles of CPU time would now take 2 or 4, for example.
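
A tiny sketch of why that broke things: a delay loop whose iteration count was hand-tuned for one CPU simply finishes early on a CPU that clocks faster or needs fewer cycles per instruction. The constant below is invented for illustration:

```c
/* Sketch of a CPU-bound delay.  LOOPS_PER_MS is a made-up constant that
 * would have been hand-tuned for one specific CPU and clock speed; on a
 * faster CPU, or one that executes the same loop in fewer cycles, the
 * delay finishes early and everything tied to it speeds up. */
#define LOOPS_PER_MS 1190UL            /* hypothetical, tuned for one machine */

void delay_ms(unsigned long ms)
{
    volatile unsigned long i;          /* volatile keeps the busy work alive  */
    for (i = 0; i < ms * LOOPS_PER_MS; i++)
        ;                              /* burn time purely by executing code  */
}
```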

Reason 3: Software

With the constrained hardware resources at the programmer's disposal, the biggest problem was often not having enough resources to make use of high-level programming languages. At the same time, there was no room to run all the fancy APIs and libraries that games often make use of today. Programmers had to interface with and drive all the hardware manually.

Most games above a certain level of complexity (i.e. all the fancy games with good graphics, etc.) also needed to focus on speed in order to handle everything a game has to do. The fancy graphics, sound and input all needed to be handled in a timely fashion.

All of that means the programmer was often relegated to using pure assembly, or even raw machine code, for their games. BASIC and other higher-level languages were simply too slow to produce satisfying results on these old platforms.

Reason 4: It was simply much easier

Even today, when writing a game engine from scratch without fancy libraries and APIs, it's much easier to adapt directly to the hardware.

However, said libraries and APIs make it much easier to write games for a multitude of different hardware configurations. In a modern PC, you also have things such as a fairly exact RTC and other high-resolution timers that can help time operations down to fractions of a millisecond. Along with these features, the OS also holds your hand in various ways by taking care of hardware calls and handling concurrency/parallelization.
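
For contrast with CPU-bound timing, a minimal sketch of the modern pattern: read a high-resolution monotonic clock every frame and scale the simulation by the measured delta, so game speed no longer depends on CPU speed. POSIX clock_gettime is used only as one example of such a timer, and update_world is a hypothetical placeholder:

```c
/* Sketch of time-based movement: measure real elapsed time each frame and
 * advance the simulation by it, so the game runs at the same speed on any
 * CPU.  Uses POSIX clock_gettime(CLOCK_MONOTONIC) as one example of a
 * high-resolution timer; update_world() is a hypothetical placeholder. */
#define _POSIX_C_SOURCE 200809L
#include <time.h>

extern int update_world(double dt_seconds);   /* returns 0 when the game ends */

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void run(void)
{
    double last = now_seconds();
    for (;;) {
        double now = now_seconds();
        if (!update_world(now - last))        /* advance by real elapsed time */
            break;
        last = now;
    }
}
```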

Summing it up:

All the modern, fancy stuff in the previous paragraph did not exist back then. Plain and simple. Combine that with the lack of variation on most platforms until around the mid-80s, and you have the two answers to why many games were CPU-bound: it was often too difficult to implement hardware detection and code that could adapt to differing hardware, and given the platform landscape at the time, it was also often unnecessary.

As for the PC market: even when the 286 was around, many developers would simply program the game to fit the 286. Even then, the relatively minor differences in clock speed on the 286 (most often between 10 and 16 MHz for most consumers) did not make a significant difference in the speed of the game.

As a result, it wasn't really until the 386 era that time-bound game logic became best practice in the gaming industry. The clock and speed differences between CPUs (for the 386, between 12 and 40 MHz) now posed a larger issue for the speed of a CPU-bound game.
