2048 (3x3, 4x4, 5x5) AI
OS:
Version: 5.3
Size: 132.96 MB
Updated: Nov 28, 2022
Developer: Huan Lin
Ask AI
You can ask the AI some questions about the game
Here are three topics and corresponding questions for you:
  1. Which game mode in 2048 (3x3, 4x4, 5x5) AI do you think is the most challenging, and why?
  2. How do you usually combine sliding and merging with the AI to achieve high scores in this game?
  3. Do you prefer this game's AI-powered difficulty adjustment or a fixed difficulty level, and why?
Game Downloads
iOS
Game Survey
  • Can you save your progress in this game?
  • How many points would you rate this game?
  • Have you played similar games?
  • Can you make money playing this game?
Description
The classic 2048 puzzle game, redefined by AI. Our 2048 is one of a kind in the market: we leverage multiple algorithms to create an AI for the classic 2048 puzzle game.

* Redefined by AI *

We created an AI that takes advantage of multiple state-of-the-art algorithms: Monte Carlo Tree Search (MCTS) [a], Expectimax [b], Iterative Deepening Depth-First Search (IDDFS) [c], and Reinforcement Learning [d].

(a) Monte Carlo Tree Search (MCTS) is a heuristic search algorithm introduced in 2006 for computer Go; it has since been used in other games such as chess, and of course in this 2048 game. MCTS chooses the best possible move from the current state of the game tree (similar to IDDFS).

(b) Expectimax search is a variation of the minimax algorithm that adds "chance" nodes to the search tree. The technique is commonly used in games with nondeterministic behavior, such as Minesweeper (random mine locations), Pac-Man (random ghost moves), and this 2048 game (random tile spawn position and value).

(c) Iterative Deepening Depth-First Search (IDDFS) is a search strategy in which a depth-limited version of DFS is run repeatedly with increasing depth limits. IDDFS is optimal like breadth-first search (BFS) but uses much less memory. This 2048 AI implementation assigns heuristic scores (or penalties) to multiple board features (e.g. the empty-cell count) to compute the optimal next move.

(d) Reinforcement learning is the training of ML models to choose actions in an environment so as to maximize cumulative reward. This 2048 RL implementation has no hard-coded intelligence (i.e. no heuristic score based on human understanding of the game): there is no built-in knowledge of what makes a good move, and the AI agent "figures it out" on its own as we train the model.
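MCTS proper grows a search tree guided by playout statistics (e.g. UCT), but its core idea, valuing a move by the outcomes of random playouts, can be shown in a much simpler form. The sketch below is flat Monte Carlo (no tree) applied to a toy Nim-style game rather than 2048, purely to illustrate the rollout idea; the game rules and function names are our own illustrative assumptions, not the app's implementation.

```python
import random

def legal_moves(remaining):
    """Toy Nim-style game: a player removes 1-3 stones; taking the last stone wins."""
    return [m for m in (1, 2, 3) if m <= remaining]

def random_playout(remaining, my_turn):
    """Play uniformly random moves to the end; return True if 'we' take the last stone."""
    while True:
        remaining -= random.choice(legal_moves(remaining))
        if remaining == 0:
            return my_turn  # whoever just moved took the last stone
        my_turn = not my_turn

def monte_carlo_move(remaining, playouts=2000):
    """Pick the move whose random playouts win most often (flat Monte Carlo)."""
    best, best_rate = None, -1.0
    for move in legal_moves(remaining):
        after = remaining - move
        if after == 0:
            return move  # immediate win, no playouts needed
        # After our move it is the opponent's turn, hence my_turn=False.
        wins = sum(random_playout(after, my_turn=False) for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best, best_rate = move, rate
    return best
```

From 5 stones, leaving a multiple of 4 (i.e. taking 1) is the strong move, and the playout statistics pick it out; full MCTS refines this by reusing playout results in a growing tree instead of restarting from scratch at every move.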
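The Expectimax idea above can be made concrete. Below is a minimal illustrative sketch for a 2048-style board: max nodes try the four slides, and chance nodes average over a new tile spawning uniformly in an empty cell (value 2 with probability 0.9, 4 with probability 0.1, the standard 2048 spawn rule). The move mechanics and the empty-cell heuristic are simplified assumptions, not the app's actual code.

```python
SIZE = 4  # assuming the 4x4 mode

def slide_row_left(row):
    """Slide and merge one row to the left, 2048-style."""
    tiles = [v for v in row if v != 0]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)  # equal neighbors merge once
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (SIZE - len(merged))

def move_left(board):
    return [slide_row_left(row) for row in board]

def transpose(board):
    return [list(r) for r in zip(*board)]

def reverse(board):
    return [row[::-1] for row in board]

# Express all four slides in terms of move_left.
MOVES = {
    "left":  move_left,
    "right": lambda b: reverse(move_left(reverse(b))),
    "up":    lambda b: transpose(move_left(transpose(b))),
    "down":  lambda b: transpose(reverse(move_left(reverse(transpose(b))))),
}

def empty_cells(board):
    return [(r, c) for r in range(SIZE) for c in range(SIZE) if board[r][c] == 0]

def heuristic(board):
    # Toy evaluation: reward empty cells (a stand-in for the richer
    # feature-based scores the description mentions).
    return len(empty_cells(board))

def expectimax(board, depth, is_player_turn):
    if depth == 0:
        return heuristic(board)
    if is_player_turn:
        # Max node: try every legal slide (one that changes the board).
        best = None
        for move in MOVES.values():
            new_board = move(board)
            if new_board != board:
                value = expectimax(new_board, depth - 1, False)
                best = value if best is None else max(best, value)
        return heuristic(board) if best is None else best
    # Chance node: average over spawn position and tile value.
    cells = empty_cells(board)
    if not cells:
        return heuristic(board)
    total = 0.0
    for (r, c) in cells:
        for tile, prob in ((2, 0.9), (4, 0.1)):
            board[r][c] = tile
            total += prob * expectimax(board, depth, True)
            board[r][c] = 0  # undo the trial spawn
    return total / len(cells)

def best_move(board, depth=2):
    scored = {name: expectimax(move(board), depth, False)
              for name, move in MOVES.items() if move(board) != board}
    return max(scored, key=scored.get) if scored else None
```

A real engine would add transposition tables and a stronger evaluation (monotonicity, smoothness, corner weighting), but the max/chance alternation shown here is the essence of the technique.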
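IDDFS itself is compact enough to state in a few lines: run depth-limited DFS with limits 0, 1, 2, … until the goal is found. The generic sketch below operates on an abstract tree given by a `children` function (not on a 2048 board, which uses heuristic scoring rather than a single goal node); it shows why IDDFS finds a shallowest solution like BFS while keeping DFS's small memory footprint.

```python
def depth_limited_dfs(node, goal, limit, children):
    """DFS that abandons any path deeper than `limit`; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in children(node):
        path = depth_limited_dfs(child, goal, limit - 1, children)
        if path is not None:
            return [node] + path
    return None

def iddfs(start, goal, children, max_depth=20):
    """Re-run depth-limited DFS with increasing limits (iterative deepening)."""
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(start, goal, limit, children)
        if path is not None:
            return path  # first hit is at the shallowest possible depth
    return None
```

Re-expanding shallow nodes at every iteration looks wasteful, but in trees with branching factor b the deepest level dominates the work, so the total cost stays within a constant factor of a single depth-limited search while memory stays O(depth).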
References:
[a] https://www.aaai.org/Papers/AIIDE/2008/AIIDE08-036.pdf
[b] http://www.jveness.info/publications/thesis.pdf
[c] https://cse.sc.edu/~MGV/csce580sp15/gradPres/korf_IDAStar_1985.pdf
[d] http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-8.pdf
Comments (0)