

    The timer problem

    Posted by Helco
    April 28th, 2012 11:56 pm

    Hi,

    Because this problem was a bug in my LD game and will be a bug in other games too, I want to present it and possible solutions for it. I would like to know what you think about these, or whether you have another solution.

    The bug

    Not all computers are equally fast. That's why a game without any timing control will run faster on some computers and slower on others.

    Possible solutions

    There are some possible solutions I already know:

    • Limit the frame rate to a constant value (all procedural movement can then use constant per-frame values)
    • Compute the elapsed time between two frames (all speed values must be multiplied by the elapsed time)

    Here is my opinion on each of them.

    Limit the frame rate to a constant value

    It looks very easy: you measure how long the current frame took, subtract that from the duration of a full frame at the desired frame rate, and the result is how long you must wait before the next frame may start. But what happens when your game suddenly becomes slow? The problem can be thousands of code lines before the code you were actually working on, and you won't see it, because you can't tell when a frame takes more time: unless the computer is very slow, the frame rate simply stays at its fixed value.
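
    To make that concrete, here is a minimal sketch of such a limiter using SDL's timer functions (the 16 ms target is just an assumed value for roughly 60 FPS):

        #include <SDL/SDL.h>

        const Uint32 FRAME_TIME_MS = 16; // target: ~60 FPS

        void gameLoop()
        {
            bool running = true;
            while (running)
            {
                Uint32 frameStart = SDL_GetTicks();
                // ... update and render one frame here ...
                Uint32 frameDuration = SDL_GetTicks() - frameStart;
                // Wait out the rest of the frame. If the frame already took
                // longer than FRAME_TIME_MS we don't wait at all, and that
                // slowdown stays invisible, which is exactly the drawback
                // described above.
                if (frameDuration < FRAME_TIME_MS)
                    SDL_Delay(FRAME_TIME_MS - frameDuration);
            }
        }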

    Compute the elapsed time between two frames

    That is what I did all the time and it worked. Back then I used the graphics functions of SDL, but I switched to OpenGL because SDL was too slow (especially for large images). With SDL the maximum frame rate was about 500 FPS and this was good enough for me. When I started Mr.No's Laboratory I got 10.000 FPS and the timer function was broken. The reason is that I use milliseconds for computing the elapsed time. But at 10.000 FPS a frame only takes 0.1 milliseconds, while the function for getting the milliseconds returns an unsigned int (so there is no fractional part). The timing function therefore only works with frame rates up to 1.000 frames per second (elapsed time = 1).

    For Ludum Dare I already knew about this problem, but I thought I had solved it by computing the elapsed time only every 50 frames (maximum frame rate = 50.000). But it doesn't work: multiple people reported that the player moves very fast. For a post-mortem version I activated VSync (so the frame rate is about 60-75), but I want to keep my high frame rates. So what can I do? I already know two solutions:

    • using nanoseconds instead of milliseconds
    • determine a proportion between the number of frames to skip (for the elapsed time calculation) and the performance of the computer (like cycles in one millisecond)

    using nanoseconds

    There is an easy-to-use function for Java, but for C++ I only know the Windows functions, and even if I don't port my applications to Linux or Mac yet, I want to keep that possibility open.
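
    One portable possibility (just an assumption of mine; it requires a C++11 compiler) is std::chrono, which offers a monotonic clock with sub-millisecond resolution:

        #include <chrono>

        // Returns the seconds elapsed since the previous call as a double,
        // so even a 0.1 ms frame produces a usable non-zero value.
        double elapsedSeconds()
        {
            using Clock = std::chrono::steady_clock;
            static Clock::time_point last = Clock::now();
            Clock::time_point now = Clock::now();
            std::chrono::duration<double> delta = now - last;
            last = now;
            return delta.count();
        }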

    determine a proportion

    Using code like while (SDL_GetTicks()-ticksAtStart<100) count++; you get a performance value for your computer. Maybe there is a proportion between this value and the number of frames I must skip to get an elapsed time value that I can use for procedural movement.
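
    As a sketch, the calibration step could look like this (how to map the resulting count to a number of frames to skip would still have to be found by experiment):

        #include <SDL/SDL.h>

        // Busy-loop for 100 ms and count the iterations; a faster computer
        // yields a larger count, which could then be mapped to how many
        // frames to skip between elapsed time measurements.
        unsigned long measurePerformance()
        {
            Uint32 ticksAtStart = SDL_GetTicks();
            unsigned long count = 0;
            while (SDL_GetTicks() - ticksAtStart < 100)
                count++;
            return count;
        }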

     

    But what's your opinion about all these problems?

    Helco

    8 Responses to “The timer problem”

    1. Jedi says:

      I don't understand what problem you're having with computing the elapsed time between frames, even if you have to use integer milliseconds. You don't have to use ints in your speed calculations, and you can use milliseconds directly by just dividing your character's speed per second by 1000. I.e. if you have a character that moves 1 unit every second, he moves 0.001 units every millisecond.

      Some code for a character might look like this:

      float Speed = 0.001f; // move character one unit to the right every second
      Vector3 Position;

      void FrameUpdate(int millisecondsPassedSinceLastFrame)
      {
          Position.x += Speed * millisecondsPassedSinceLastFrame;
      }

      Does this help?

      • Helco says:

        Yeah, this is the function I used, but it only works with frame rates smaller than 1001 FPS. At 1001 FPS your millisecondsPassedSinceLastFrame is 0, so the position doesn't change.

        • zanders3 says:

          Your eye/screen can't really see/refresh beyond 60 fps, so why not just limit the framerate to 60 to prevent this problem? :)

          Alternatively you could clamp it so it never hits zero:
          millisecondsPassedPerFrame = Math.Max(millisecondsPassedPerFrame, 1);

        • Jedi says:

          Oooooooh, I get it now. That's funny :). I was confused because I'm used to seeing commas and dots the other way around in numbers :). I read 10.000 as 10.

          Hmmm. I think if you're getting buggy movement, it's probably not caused by the high FPS, but by a subtle bug that introduces some framerate dependencies (I've put in quite a few in my day).

          At 3000 FPS, you should have two frames that update with an elapsed time of 0 ms followed by one frame that updates with 1 ms. It won't matter, because no display will update that fast, and if your movement code is working properly, the updates at 0 ms shouldn't move the character and the updates at 1 ms should move it the right amount, on average.

          Now, if at 3000 FPS your function is called three times and each one has an elapsed time of 0, that's something different. However, I think that's unlikely, since at the end of the day the function doing the calling is probably looking at a ms counter somewhere.

          I suspect that your update function actually moves the character even when it is called with an elapsed time of 0. You might try adding an if( elapsed == 0 ) return; to the start of your update function and see if the behavior changes (it shouldn't).

          Also, one thing I like to do to test framerate dependency issues is to open a lot of instances of the game until the framerate drops way low – then see if things still move at the right speed.

    2. Just FYI, there are more ways than 2 to deal with framerates. Important things to consider are vsync, determinism, and how you want your game to react to jitter/slow computers.

      One of the more elegant and general ways to do it is by using a fixed delta timestep. Read the following for more info:
      http://gafferongames.com/game-physics/fix-your-timestep/

      This decouples your logic/game tick rate from your graphics rate, while at the same time making each tick the same length, which makes sure everything is deterministic and you don’t run into floating-point precision issues. If you want to go above and beyond you can do what the article suggests in the end which is interpolate between the current frame and the last frame based on your time accumulator.
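
      As a rough sketch of the accumulator loop from that article (update(), render() and currentTimeSeconds() are hypothetical placeholders here, and the 60 Hz tick rate is just an assumption):

          #include <chrono>

          void update(double dt) { /* advance the game state by dt seconds */ }
          void render()          { /* draw the current state */ }

          double currentTimeSeconds()
          {
              using namespace std::chrono;
              return duration<double>(steady_clock::now().time_since_epoch()).count();
          }

          int main()
          {
              const double DT = 1.0 / 60.0; // fixed logic tick, assumed 60 Hz
              double previous = currentTimeSeconds();
              double accumulator = 0.0;

              while (true) // the real loop would check a quit flag
              {
                  double now = currentTimeSeconds();
                  accumulator += now - previous;
                  previous = now;

                  // Run as many fixed-length ticks as real time allows; every
                  // tick is exactly DT, so the simulation stays deterministic.
                  while (accumulator >= DT)
                  {
                      update(DT);
                      accumulator -= DT;
                  }

                  // Optionally interpolate between the previous and current
                  // state by accumulator / DT before drawing.
                  render();
              }
          }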

      It all depends on what you want to do. For example, if you want your game to respond to slow computers/hiccups in the system by actually slowing down, then ideally you want to use vsync and couple your game tick rate to your draw framerate so that you update logic/state exactly once between every frame drawn.

    3. yoklov says:

      Well, just for the record, Java's nanosecond counter is a lie. If it's not exposed in C++, then Java doesn't get it either.

      But really, think about it like this: let's say you have a beastly 4 GHz processor (cores don't matter in this case, just cycles). That means it runs one cycle every 0.25 nanoseconds.

      Let's say that updating the nanosecond counter takes a single operation (it takes more, but we'll be generous).

      If Java's nanosecond counter were accurate, your computer would spend 25% of its time keeping time (one counting operation every nanosecond is one out of every four cycles). Which would be absurd. The best they can do is guess.

    4. Shadow says:

      If you ask me for my opinion, I don't think that having such exaggerated FPS helps your game in any way. For starters, it's creating a problem for you.

      A game running at more than 100 fps is visually absurd. Beyond 60 fps there's practically no difference. Besides, most screens refresh at 60 or 75 Hz.

      Unless you want to overheat the player's gfx card unnecessarily, I would recommend running your games at a decent constant limit. I've never required anything to run beyond 60 fps. That makes the game gfx-card friendly, since I won't be abusing the card just for the sake of a counter.

      Be warned that you may still need to compute the time difference between frames, because it's almost impossible to achieve a constant framerate with millisecond accuracy, so I usually set a fixed framerate and still compute the difference to account for little inaccuracies.

      I also set a limit on the elapsed time to avoid problems with computer hiccups (let's say the computer froze for 300 milliseconds in the middle of an accelerated movement: I don't want the physics propelling the character to the moon, or burying him underground due to an instant gravity boost).
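
      Such a clamp might look like this (the 100 ms cap is an assumed value, not one from my games):

          #include <SDL/SDL.h>

          Uint32 lastTicks = 0;

          // Returns the frame's elapsed time in ms, capped so that a long
          // freeze can't propel the physics to the moon.
          Uint32 clampedElapsed()
          {
              Uint32 now = SDL_GetTicks();
              Uint32 elapsed = now - lastTicks;
              lastTicks = now;
              if (elapsed > 100) // assumed cap for hiccups
                  elapsed = 100;
              return elapsed;
          }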

    5. Tourgen says:

      If the user's gfx hw and driver support vsync, then use it. That's the hardest part of the problem solved right there!

      Otherwise you could try throttling the graphics update using sleep(). It's tricky because most systems do not support sleep times under 5 ms. Others won't sleep for less than 20 ms (which is pretty useless for framerate throttling). 60 fps is a frame every 0.0167 seconds, or ~16.7 ms. Busy loops checking the time seem pretty pointless, but they may be unavoidable on systems that don't support vsync (dammit ATI, you're wrecking games). With vsync off I can reliably throttle to 50 fps. Not great, but serviceable considering. It's a good idea to average a chunk of frames up front and make sure things are working.

      I interpolate game logic by the delta time between updates. If you can't guarantee a vsync clock signal to work from in all cases, then you have to be able to deal with a varying update rate. Use the high-precision timer in Windows (millisecond accuracy) via whatever means you have of calling it. It's a feature of all modern CPUs I believe, so use whatever means your OS has to allow access to it. In Windows it's in winmm.dll I believe. I use GLFW, which has a nice function called glfwGetTime(); it returns the system time in a double with units of seconds and works across OSes, so I never have to worry about it.
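
      For illustration, a delta time helper on top of glfwGetTime() can be as small as this (GLFW 3 header shown; GLFW 2 used <GL/glfw.h> instead):

          #include <GLFW/glfw3.h>

          double lastTime = 0.0;

          // Seconds since the previous call, as a double, so high frame
          // rates never truncate to zero like integer milliseconds do.
          double frameDelta()
          {
              double now = glfwGetTime(); // seconds since glfwInit()
              double delta = now - lastTime;
              lastTime = now;
              return delta;
          }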



