Unexpected results when playing against the PC in my "Schafkopf" game!
-
In the past I did not keep notes on how many games the PC won or lost; I just tried to fix all known issues... My assumption was that the programmed rules (if statements and queries) let the PC play reasonably well, but that in some special cases it makes mistakes like a beginner. Today I added three or four lines of code to track it, and to my big surprise the result is that I can be happy to win 50-55 % of the games (either as the winning solo player or as a member of the winning team).
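For what it's worth, the kind of tracking described above really only needs a couple of counters updated at the end of each game. Below is a minimal sketch of the idea, assuming C++; all struct and member names are invented for illustration and are not from the actual Schafkopf code:

```cpp
#include <cstdio>

// Hypothetical win/loss tracker: a few counters bumped once per finished game.
struct GameStats
{
    int humanWins = 0;     // games the human won (solo or as part of the winning team)
    int computerWins = 0;  // games won by the computer players

    void Record(bool humanWon)
    {
        if (humanWon) ++humanWins; else ++computerWins;
    }

    double HumanWinRatePercent() const
    {
        const int total = humanWins + computerWins;
        return total == 0 ? 0.0 : 100.0 * humanWins / total;
    }
};

int main()
{
    GameStats stats;
    stats.Record(true);   // human won this game
    stats.Record(false);  // computer won this one
    std::printf("Human win rate: %.1f %%\n", stats.HumanWinRatePercent());
}
```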
-
I have written a couple of games in which the computer may play. One of them has two strategies, a "safe strategy" and an "aggressive strategy", and I have noticed that the "aggressive strategy" wins more frequently than the "safe strategy". In the other, like Data playing Strategema, the computer will play for a draw in some cases.
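A common way to keep such strategies interchangeable is a small strategy interface with one implementation per play style. The sketch below is only a generic illustration of that pattern in C++, not code from either of the games mentioned; all type and function names are made up:

```cpp
#include <memory>

// Stand-in for whatever the real game tracks.
struct GameState { int myScore; int opponentScore; };

// One common interface; each play style is a separate implementation.
struct IStrategy
{
    virtual ~IStrategy() = default;
    virtual int ChooseMove(const GameState& state) const = 0;
};

struct SafeStrategy : IStrategy
{
    int ChooseMove(const GameState&) const override
    {
        return 0; // e.g. always take the low-risk move
    }
};

struct AggressiveStrategy : IStrategy
{
    int ChooseMove(const GameState& state) const override
    {
        // e.g. press harder when behind
        return state.myScore < state.opponentScore ? 2 : 1;
    }
};

int main()
{
    std::unique_ptr<IStrategy> ai = std::make_unique<AggressiveStrategy>();
    GameState state{10, 20};
    int move = ai->ChooseMove(state); // the caller never cares which strategy is active
    (void)move;
}
```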
-
Back in the days of FidoNet, when dinosaurs roamed the earth, I wrote a dice game for BBSs called Greedy* (my friends and I called the real game either Greedy or 9-Dice) in which a player would play against a computer opponent. I programmed its decision tree to play an average player's game, and the random dice throws were calculated by the same routine for both the computer and the human.

Quite often in a game, whether with real dice or in my app, you might have to decide whether to risk throwing the remaining dice and lose any points you had racked up that round, or take the points and pass the dice. Kind of like Blackjack where you're sitting on 15 or 16 but the dealer is showing an ace or a face card: do you draw or stand?

Anyway, that decision for the computer was also randomized, and yet I was always surprised how often the computer would be way behind, the player close to winning (which did increase the weight of that decision toward risking it), and when it decided to go with the risk it would roll enough points to come from behind and win. I did not program it that way, though I had a few people ask me if I did. :laugh:

* Almost identical to a game called Farkl that you can play on Facebook; I did not write that one.
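To make that decision concrete, here is a hedged reconstruction in C++ of a "risk it or bank the points" choice that is randomized but weighted toward risk when the opponent is close to winning. This is not the original BBS code; the function name, threshold, and weights are invented purely to illustrate the idea:

```cpp
#include <random>

// Randomized risk decision: baseline chance of an "average player",
// nudged upward when the human opponent is close to the target score.
bool ComputerRisksAnotherThrow(int opponentScore, int targetScore, std::mt19937& rng)
{
    std::uniform_real_distribution<double> roll(0.0, 1.0);

    double riskChance = 0.5;  // average player's baseline
    double opponentProgress = static_cast<double>(opponentScore) / targetScore;
    if (opponentProgress > 0.8)   // opponent is close to winning
        riskChance += 0.3;        // weight the decision toward risking it

    return roll(rng) < riskChance;
}

int main()
{
    // The same engine could also drive the dice throws for both players,
    // mirroring the "same routine for both" point above.
    std::mt19937 rng(std::random_device{}());
    bool risk = ComputerRisksAnotherThrow(9500, 10000, rng);
    (void)risk;
}
```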
-
Have you set it up so you can have the computer play itself with replay of any randomized values? That is a good way to see how tweaks in your rules will change the outcomes. Keep each rule package as a separate, versioned component. You can then randomize or choose which rule package is active.
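A minimal sketch of that setup, assuming C++: seed the random generator explicitly so every "random" value can be replayed exactly, and hide each rule package behind a small interface so you can swap them on the same deals. The interface, class names, and placeholder game loop below are invented for illustration:

```cpp
#include <cstdint>
#include <random>

// A rule package is anything that can play one full game with a given RNG.
struct IRulePackage
{
    virtual ~IRulePackage() = default;
    virtual bool PlayOneGame(std::mt19937& rng) const = 0; // true if "team A" wins
};

struct RulesV1 : IRulePackage
{
    bool PlayOneGame(std::mt19937& rng) const override
    {
        // Placeholder for the real game loop and card shuffling.
        return std::uniform_int_distribution<int>(0, 1)(rng) == 1;
    }
};

// Fixed seed -> identical sequence of deals every run, so a change in the win
// count reflects the rule package, not luck.
int CountWins(const IRulePackage& rules, std::uint32_t seed, int games)
{
    std::mt19937 rng(seed);
    int wins = 0;
    for (int i = 0; i < games; ++i)
        wins += rules.PlayOneGame(rng) ? 1 : 0;
    return wins;
}

int main()
{
    RulesV1 v1;
    int wins = CountWins(v1, 12345u, 1000); // rerunning with seed 12345 replays the same games
    (void)wins;
}
```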