Mercury

Community Members · 25 posts
  1. Some further thoughts: there is no possibility of deadlock if only one thread at a time acquires a lock while holding another, and a global JS mutex can designate that thread. With mutexes the primary danger is deadlock: one thread holds mutex A while attempting to lock mutex B, while another thread holds B while attempting to lock A. JS needs a global mutex, and JS code will need to lock arbitrary compiled-side data, so we can't prevent the thread which holds the JS mutex from locking additional mutexes concurrently, which creates half of a deadlock. What we can do is not provide the other half. The compiled code could be modified so all shared data is protected by mutexes, with the rule that nested locks are prohibited. Where it is not possible to take one mutex at a time without exposing an object in an incomplete state, we can use std::lock to lock multiple mutexes atomically. The resulting code will, I think, not be too complicated, and the no-nested-mutex rule is relatively easy to verify. As long as only the thread which currently holds the JS mutex is allowed to take nested locks, there is no possibility of deadlock. EDIT: I forgot to mention that this plan also requires a message queue, to reduce the nested-lock sites which would otherwise need direct access via CmpPtr. All Post and Broadcast calls can be made safe automatically.
  2. I'm working on dynamic event subscription and have some questions about how events currently work. Why is CComponentManager::DynamicSubscriptionNonsync only usable for nonsync subscriptions? And why are messages dispatched in order of entityId? Thank you.
  3. We wouldn't need to switch to JS just to check for the existence of a file, and the result could be cached. Regarding the initial point of the thread, though: today I'm thinking a mutex is overkill and we can use std::atomic. We just need to figure out which data needs to be marked atomic.
  4. In an extreme case we could have C++ which could optionally be replaced by JS. Probably not needed, I hope. The purpose of a mutex is to prevent memory collisions: one thread reading or writing memory while another thread is in the middle of writing it. Regardless of the details of where threads split and join, the mutex is needed, if only to prevent the main thread from reading data while the simulation thread is writing it.
  5. Thanks. It looks like maybe 2% of users are on two-core/two-thread Celeron processors. Separating simulation from graphics may still work better for them, since the OS doesn't take that much time*, but it's still not great. Is there a way we can test on the very low end? The mutex belongs to the data being accessed, not to the code which is accessing it. *Particularly when using allocators. EDIT: I just checked and found my laptop's A9-9425 is also 2-core/2-thread, so I guess I can test this. Also, about 1% of users are on 2-thread AMD machines.
  6. I'm not sure I understand: each access of a component (or other data we need to protect?) would require acquiring either a read lock or a read-write lock, and where the access comes from shouldn't matter. Dynamic message subscription is on my todo list; I've added static message types to the list as well, thanks. Anyway, regardless of the (very much non-trivial) difficulties of multithreading the simulation itself, just separating it from the main thread looks like it would buy a large amount of simulation performance while keeping graphics smooth under load: 20 fps × 16 ms/frame = 320 ms/second, a 47% increase in the simulation time budget, at least for any machine that has two or more cores (4+ threads). And that is just considering simulation lag. If we consider animation lag, then each simulation turn only gets ~34 ms to run in (at 20 fps). In a dedicated thread this is not an issue at all: the frame rate remains constant despite simulation lag. Do we have any data on what fraction of users have a single-core machine?
  7. The first phase would just be to separate simulation from graphics; this alone is worth the trouble. I don't understand what you mean regarding mutexes in an event-based system. What sorts of problems do you have in mind? Regarding the more ambitious project of multithreading the simulation itself, the JavaScript is one issue to deal with, but I think not an insurmountable one. Some code which is currently in JS might have to be rewritten in C++, and we would of course have to consider other engine users as well. The threading model I'm thinking of for the second phase is to split into multiple threads during certain expensive tasks and then continue as a single thread soon after, so in some cases JS isn't involved at all. One option is to generate a priority queue, sorted in some deterministic order (entityID, maybe), multithreaded in C++ and then pass it to JS. It's not an easy problem.
  8. I'm thinking about a strategy to add more threading. Components would be protected from memory collisions by mutexes*. Is there any other data that needs mutex protection, i.e. any data which is both written to and read from during a game? *Incrementing read/read-write locks, at the ComponentType level of granularity.
  9. I'm trying to write a patch to reduce network traffic by preventing a user's commands from being echoed back to them by the server. Is GUID the correct property to look at? I need to compare a CNetMessage against a CNetServerSession at network/NetServer.cpp line 402.
  10. Ah, I see now. I was under the impression that graphics and simulation ran in separate threads but have since learned otherwise. We should revisit this after that issue is resolved. Doing some expensive tasks only on every other turn is good, but running them on turns where we have extra time to work with is better. Repathing is probably the hard limit on what we can do here before things look off. There is some simulation overhead in checking those timers and running those queries; I don't know how much of either. There is also some network overhead.
  11. Regarding lag increasing as work piles up: my recent patch to the GC scheduling, https://code.wildfiregames.com/D4758, should change the performance profile considerably by feeding all extra time in the simulation thread (and sometimes more than that) into the GC slice.
  12. After being informed how things work, I'll revise my claims to a modest performance boost, maybe around 10%, and a reduction in network traffic of around 10-18%, depending on player APM. Those things may be worth 50 ms of queued-command lag / single-player input lag; 500 ms sounds like too much. The difference between 2 turns per second and 5 turns per second (300 ms) is much greater than between 5 and 4 (50 ms).
  13. Currently the default simulation turn length (DEFAULT_TURN_LENGTH) is 200 ms. A higher number would reduce the number of turns per second and thus our total simulation load. For example, setting 250 ms would reduce simulation cost by 20%. The disadvantages I see are increased lag on input and between a unit finishing one task and beginning another. The input lag can be counteracted in multiplayer by reducing the number of turns commands are delayed (COMMAND_DELAY_MP). For example, if COMMAND_DELAY_MP is reduced from 4 turns to 3 turns, it would balance out increasing DEFAULT_TURN_LENGTH to 250 ms. Single-player COMMAND_DELAY is 1, so it can't be reduced; we would just have to accept up to 50 ms of extra input lag there. I played a quick single-player game and found both input lag and lag between queued tasks unnoticeable. This patch implements these changes: https://code.wildfiregames.com/D4760. Try it and see if it feels weird?
  14. Currently everyone sends their commands to the server, which then collates them and sends them back out to everyone. Peer-to-peer would skip the server step and thus reduce latency by half and traffic by more than half. As an added bonus, the game would not end when the host disconnects.