Right now, score is kept primarily within a trial, and for a few values (like remaining time) across a session. It would be useful to keep an experiment-designer-defined score running both across trials within a session and across sessions within an experiment.
This defined score would then be reportable in user feedback messages as well as to the database. If a shared leaderboard is ever implemented, this score would also be reported there, though that functionality is purely theoretical at this point.
Open topics for discussion before beginning implementation:
How to express what values contribute to "score"
What values can contribute toward score (hits, misses, time taken, time remaining, etc.)
How the score should be reported to the user
Are there any potential issues with logging scores to the database? Does the score expression need to be logged? Do the source values consumed to create the score get logged?
This is an interesting and useful feature idea, but the bounds of what could be done here are quite broad. Options I would consider include:
User-specified, string-based "score expressions" that could include math over one or more "base metrics" we provide. These could become arbitrarily complex and would require a nearly complete math library to be defined; for example, score = "max(1/taskTimeS, 100) * shotsHit / (shotsHit + shotsMissed)". Alternatively, this could be simplified to only allow particular, predefined "base metrics" for score.
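As a rough illustration of the expression-based option, a score expression could be evaluated over a dictionary of base metrics with only a whitelisted set of math functions exposed. This is a minimal sketch, not a proposal for the actual parser; the metric names (taskTimeS, shotsHit, shotsMissed) are just the placeholders from the example expression above.

```python
# Whitelist of math helpers the expression may call; everything else,
# including Python builtins, is kept out of the evaluation scope.
ALLOWED_FUNCS = {"max": max, "min": min, "abs": abs}

def evaluate_score(expression: str, metrics: dict) -> float:
    """Evaluate a user-specified score expression against base metrics."""
    scope = {"__builtins__": {}, **ALLOWED_FUNCS, **metrics}
    return float(eval(expression, scope))

metrics = {"taskTimeS": 12.5, "shotsHit": 8, "shotsMissed": 2}
score = evaluate_score(
    "max(1/taskTimeS, 100) * shotsHit / (shotsHit + shotsMissed)", metrics)
# With these sample values: max(0.08, 100) * 8 / 10 = 80.0
```

A real implementation would likely want a proper expression parser rather than eval, but the scoping trick shows how the "predefined base metrics only" restriction could be enforced.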
Working from @jspjutNV's initial list I'd say a good set of base metrics (for trials or sessions) would include:
hits
misses
targets destroyed
damage done
total shots
accuracy (as ratio or percentage)
task time
time remaining
time spent firing (as a total or, more likely, a ratio)
trial successes (session only)
trial failures (session only)
I believe score should be reported to the user in one of two core ways:
Via the in-game banner (where different field visibility should probably be controllable, i.e. score but not time)
Through formatted feedback messages
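For the formatted-feedback route, the message could be a template the experiment designer writes, with score fields substituted in at display time. A minimal sketch, assuming a Python-style format template (the placeholder names are hypothetical):

```python
def format_feedback(template: str, values: dict) -> str:
    """Fill a designer-authored feedback template with score values."""
    return template.format(**values)

msg = format_feedback(
    "Trial complete! Score: {score:.1f} (accuracy {accuracy:.0%})",
    {"score": 80.0, "accuracy": 0.8})
# -> "Trial complete! Score: 80.0 (accuracy 80%)"
```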
I agree that with more configurability, logging score to the database isn't a bad idea (you could support logging the score expression through the sessParamsToLog array); however, you should certainly be able to reproduce any score metric from the results written to the database (this might be a good test of what our db is missing "easy access" to).
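To make the "reproduce any score metric from the database" test concrete, here is a sketch of recomputing a session-level accuracy from logged per-trial rows. The table and column names are entirely hypothetical, not the actual schema:

```python
import sqlite3

# In-memory stand-in for the results database; schema is hypothetical.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE trials (session_id TEXT, hits INTEGER, "
    "misses INTEGER, task_time_s REAL)")
con.executemany(
    "INSERT INTO trials VALUES (?, ?, ?, ?)",
    [("s1", 8, 2, 12.5), ("s1", 5, 5, 14.0)])

# Recompute session accuracy purely from logged trial results.
(accuracy,) = con.execute(
    "SELECT CAST(SUM(hits) AS REAL) / (SUM(hits) + SUM(misses)) "
    "FROM trials WHERE session_id = 's1'").fetchone()
# 13 hits out of 20 shots -> 0.65
```

If a metric needed for scoring can't be rebuilt this way from what's logged, that's a sign the schema is missing something.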