Playing multiplayer online terminal Pong with basic Linux utilities



Agent Memory

Before I get into implementing agent memory, I want to explain what inspired researchers to use it in the first place.

When I first approached this project, I tried to train the agent on data generated in real time. So at timestep t, I would use the (S, A, R, S') tuple from timestep t-1 to train my agent.

The best way I can explain the problem with this is by going back to my early days in college. In my sophomore year, I had statics theory shoved down my throat for a year straight (I pursued mechanical engineering). Then in my junior year, I did nothing with statics and instead had machine design shoved down my throat. Come senior year, I’d forgotten everything there was to forget about statics.

In our environment, there might be 100 frames of the ball just moving toward the left side of the screen. If we repeatedly train our agent on the same exact situation, it will eventually overfit to that situation and won't generalize to the entire game, just like how I forgot statics while studying machine design.

So, we store all our experiences in memory, then randomly sample from the whole list of memories. This way the agent learns from all of its experiences while training, not just its current situation (to beat this dead horse a little more: imagine if you couldn't learn from the past and could only learn from the exact present).

Alright, now for the implementation. For the memory, I make a separate class with four separate deques (first-in, first-out lists of fixed size). The lists contain the frames, actions, rewards, and done flags (which tell us whether a state was terminal). I also add a function that lets us append to these lists.
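A minimal sketch of such a memory class (the parameter names follow the next_x convention mentioned below, but the details here are assumptions, not the author's actual code):

```python
import random
from collections import deque

class Memory:
    def __init__(self, max_len):
        # Four fixed-size FIFO buffers; the oldest experiences
        # fall off the left end once max_len is reached.
        self.frames = deque(maxlen=max_len)
        self.actions = deque(maxlen=max_len)
        self.rewards = deque(maxlen=max_len)
        self.done_flags = deque(maxlen=max_len)

    def add_experience(self, next_frame, next_action, next_reward, next_done):
        self.frames.append(next_frame)
        self.actions.append(next_action)
        self.rewards.append(next_reward)
        self.done_flags.append(next_done)

    def sample_index(self):
        # Pick a random timestep to train on, so the agent
        # learns from its whole history rather than only the present.
        return random.randrange(len(self.frames))
```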

The odd naming of next_x in add_experience will become clear later.



To run

To play the game, compile it with gcc, link the pthread library, and run the resulting program:
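Assuming the source file is named pong.c (the actual filename may differ):

```shell
# Link against pthread, which the game uses for concurrency
gcc pong.c -o pong -lpthread
# Launch the game
./pong
```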

If you can score over 100 you get a special prize 🙂

Reinforcement Learning Overview

Time for a few quick definitions. In Reinforcement Learning, an agent perceives its environment through observations and rewards, and acts upon it through actions.

The agent learns which actions maximize the reward, given what it has learned from the environment.

More precisely, in our Pong case:

  • The agent is the Pong AI model we’re training.
  • The action is the output of our model: it tells whether the paddle should move up or down.
  • The environment is everything that determines the state of the game.
  • The observation is what the agent sees from the environment (here, the frames).
  • The reward is what our agent gains after taking an action in the environment (here, -1 after missing the ball, +1 when the opponent misses it).
Note: The reward in our Pong environment will actually be zero for most steps, because no point is scored at those moments (i.e. neither player misses the ball).
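To make these roles concrete, here is a toy agent/environment loop in plain Python (a hypothetical stand-in, not the actual Gym Pong API): the environment returns a new state, a reward, and a done flag on every step, and the reward stays at zero until a point is scored.

```python
import random

def toy_env_step(state, action):
    # The "environment": updates its state based on the agent's action.
    state = state + (1 if action == "up" else -1)
    reward = 0
    done = abs(state) >= 5          # a "point" is scored at +/-5
    if done:
        reward = 1 if state > 0 else -1  # nonzero reward only on a point
    return state, reward, done

state, done, total_reward = 0, False, 0
while not done:
    action = random.choice(["up", "down"])   # placeholder random agent
    state, reward, done = toy_env_step(state, action)
    total_reward += reward
```

Most iterations add a reward of zero; the episode ends with a single +1 or -1, mirroring how sparse the reward signal is in the real game.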

Preprocessing Frames

Currently, the frames received from the OpenAI Gym environment are much larger than we need, with far higher resolution than necessary. See below:

We don’t need any of the white space at the bottom, or any pixels above the white stripe at the top. Also, we don’t need this to be in color.

First, we crop the image so that only the important area remains. Next, we convert the image to grayscale. Finally, we resize the frame with cv2 using nearest-neighbor interpolation, then convert the image datatype to np.uint8. See the code below for implementation…

The resulting image looks like this:

Plotting the Results with TensorBoard and FloydHub

Go back to your FloydHub Workspace, and open the notebook train-with-log.ipynb. Then, run the notebook (click on the Run tab at the top of the workspace, then on Run all cells). Now, click on the TensorBoard link (inside the blue dashboard bar at the bottom of the screen).

Blue dashboard bar on FloydHub Workspace showing TensorBoard and System Metrics

After clicking the link, FloydHub automatically detects all the TensorBoard log directories. You should see a window that looks like this (click on Wall to see the following accuracy and loss graphs):

The notebook train-with-log.ipynb is just the old train.ipynb with additional lines of code to log some variables and metrics.

With Keras, this involves creating a TensorBoard callback object and passing it to the training call:
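A minimal, self-contained sketch of both steps (the model, the data, and the log directory here are stand-ins for illustration, not the tutorial's actual setup):

```python
import numpy as np
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# 1) Create the callback; log_dir is an assumption, point it at
#    whatever directory your TensorBoard instance watches.
tbCallBack = TensorBoard(log_dir='./logs', write_graph=True)

# 2) Pass it to fit() so Keras writes TensorBoard summaries while training.
model = Sequential([Dense(1, input_shape=(4,))])  # tiny stand-in model
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(32, 4), np.random.rand(32, 1)
model.fit(x, y, epochs=1, callbacks=[tbCallBack], verbose=0)
```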

I also used an additional function, tflog, to easily keep track of variables not related to Keras, like the running_reward (a smoothed reward). After importing the file containing the function, calling it is as simple as:


‘pong’ is not recognized

Solution 1

Make sure you installed pong globally. You do this by running the command
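Assuming the game is distributed as an npm package named pong (inferred from the error message, not confirmed):

```shell
npm install -g pong
```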

Notice the -g; that is what makes the install global.

Solution 2

If it is installed globally, make sure that npm’s global bin directory is in your PATH. For instructions on how to do that, see this Stack Overflow post.



Playing field too large

Solution 1

Run pong with a smaller field using the command