Summit XL Environment

The Summit XL was created by Robotnik. Its strong mechanical structure allows it to carry heavy loads, and it is used for research, surveillance, remote monitoring and military applications.

Real Summit XL

The simulated robot is placed in an office-like environment.

Simulated Summit XL

Robot Environment

openai_ros.robot_envs.sumitxl_env module

class openai_ros.robot_envs.sumitxl_env.SumitXlEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all Summit XL environments.

__init__()[source]

Initializes a new SumitXlEnv environment.

Executes a call to the service /summit_xl/controller_manager/list_controllers to get the list of controllers to be restarted.

To check any topic we need the simulation running, which requires two things:
1) Unpause the simulation: without this, the stream of data does not flow. This applies to simulations that are paused for whatever reason.
2) If the simulation was already running for some reason, we need to reset the controllers. This is because some plugins, such as those related to tf, do not understand a simulation reset and need to be restarted to work properly.
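The sketch below illustrates those two steps with plain rospy calls. It is a minimal sketch, not the library's own code: the Gazebo /gazebo/unpause_physics service and the controller_manager_msgs/ListControllers type are standard ROS interfaces, the node name is made up, and SumitXlEnv performs these steps internally.

```python
import rospy
from std_srvs.srv import Empty
from controller_manager_msgs.srv import ListControllers

rospy.init_node("summit_xl_startup_check", anonymous=True)  # hypothetical node name

# 1) Unpause Gazebo physics so the sensor topics start publishing.
rospy.wait_for_service("/gazebo/unpause_physics")
unpause_physics = rospy.ServiceProxy("/gazebo/unpause_physics", Empty)
unpause_physics()

# 2) Query the controller manager for the loaded controllers so they can be
#    restarted after a simulation reset (some tf-based plugins require this).
rospy.wait_for_service("/summit_xl/controller_manager/list_controllers")
list_controllers = rospy.ServiceProxy(
    "/summit_xl/controller_manager/list_controllers", ListControllers)
response = list_controllers()
rospy.loginfo("Loaded controllers: %s", [c.name for c in response.controller])
```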

The Sensors: The accessible sensors are the ones considered useful for AI learning.

Sensor Topic List:
* /gps/fix: GPS position data
* /gps/fix_velocity: GPS speed data
* /hokuyo_base/scan: Laser readings
* /imu/data: Inertial Measurement Unit data (orientation and acceleration)
* /orbbec_astra/depth/image_raw: Depth camera image
* /orbbec_astra/depth/points: Depth camera point cloud
* /orbbec_astra/rgb/image_raw: RGB camera image
* /summit_xl/odom: Odometry

Actuators Topic List:
* /cmd_vel: Velocity commands for the robot base
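As an illustration of the interfaces involved, the following sketch reads one message from each of the main sensor topics and opens the /cmd_vel publisher. It assumes the standard ROS message types for these topics (NavSatFix, LaserScan, Imu, Odometry, Twist); in normal use the environment subscribes to them for you and exposes the get_* methods documented below.

```python
import rospy
from sensor_msgs.msg import NavSatFix, LaserScan, Imu
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

rospy.init_node("summit_xl_topic_probe", anonymous=True)  # hypothetical node name

# Grab a single message from each sensor topic (blocks up to the timeout).
gps_fix = rospy.wait_for_message("/gps/fix", NavSatFix, timeout=5.0)
laser = rospy.wait_for_message("/hokuyo_base/scan", LaserScan, timeout=5.0)
imu = rospy.wait_for_message("/imu/data", Imu, timeout=5.0)
odom = rospy.wait_for_message("/summit_xl/odom", Odometry, timeout=5.0)

# The only actuator interface is the velocity command topic.
cmd_vel_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
```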


__module__ = 'openai_ros.robot_envs.sumitxl_env'
_camera_depth_image_raw_callback(data)[source]
_camera_depth_points_callback(data)[source]
_camera_rgb_image_raw_callback(data)[source]
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_camera_depth_image_raw_ready()[source]
_check_camera_depth_points_ready()[source]
_check_camera_rgb_image_raw_ready()[source]
_check_gps_fix_ready()[source]
_check_gps_fix_velocity_ready()[source]
_check_imu_ready()[source]
_check_laser_scan_ready()[source]
_check_odom_ready()[source]
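All of the _check_*_ready helpers above follow the same basic pattern: block on the corresponding topic until a first message arrives, retrying while the node is alive. The helper below is an illustrative re-implementation of that pattern, not the library's exact code; the function name is made up.

```python
import rospy
from nav_msgs.msg import Odometry

def wait_for_topic(topic, msg_type, timeout=1.0):
    """Block until one message is received on `topic`, retrying on timeout."""
    data = None
    while data is None and not rospy.is_shutdown():
        try:
            data = rospy.wait_for_message(topic, msg_type, timeout=timeout)
        except rospy.ROSException:
            rospy.logwarn("%s not ready yet, retrying...", topic)
    return data

# e.g. the odometry check waits on /summit_xl/odom
odom = wait_for_topic("/summit_xl/odom", Odometry)
```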
_check_publishers_connection()[source]

Checks that all the publishers are working.
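A typical implementation of this kind of check polls Publisher.get_num_connections() until the robot's velocity controller has subscribed. The sketch below is illustrative, not the library's exact code, and assumes a ROS node has already been initialised.

```python
import rospy
from geometry_msgs.msg import Twist

def wait_for_subscribers(pub, rate_hz=10):
    """Spin until at least one subscriber is connected to the publisher."""
    rate = rospy.Rate(rate_hz)
    while pub.get_num_connections() == 0 and not rospy.is_shutdown():
        rospy.logdebug("Waiting for subscribers on %s...", pub.name)
        rate.sleep()

cmd_vel_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
wait_for_subscribers(cmd_vel_pub)
```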

_compute_reward(observations, done)[source]

Calculates the reward based on the given observations.

_get_obs()[source]
_gps_fix_callback(data)[source]
_gps_fix_velocity_callback(data)[source]
_imu_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done based on the given observations.

_laser_scan_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot to its initial pose.
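The methods _set_init_pose, _init_env_variables, _set_action, _get_obs, _is_done and _compute_reward are virtual: SumitXlEnv declares them and a concrete task environment has to override them. The sketch below shows a minimal, hypothetical subclass; the class name, discrete action mapping, observation layout, termination condition and reward values are all invented for illustration.

```python
import numpy
from openai_ros.robot_envs import sumitxl_env

class SummitXlExampleTaskEnv(sumitxl_env.SumitXlEnv):
    """Hypothetical task environment built on top of SumitXlEnv."""

    def _set_init_pose(self):
        # Stop the base before every episode.
        self.move_base(linear_speed=0.0, angular_speed=0.0)

    def _init_env_variables(self):
        self.cumulated_steps = 0

    def _set_action(self, action):
        # Illustrative discrete action set: 0=forward, 1=turn left, 2=turn right.
        if action == 0:
            self.move_base(linear_speed=0.3, angular_speed=0.0)
        elif action == 1:
            self.move_base(linear_speed=0.0, angular_speed=0.3)
        else:
            self.move_base(linear_speed=0.0, angular_speed=-0.3)

    def _get_obs(self):
        # Downsampled laser ranges plus the 2D position from the odometry.
        scan = self.get_laser_scan()
        odom = self.get_odom()
        return numpy.array(list(scan.ranges[::10]) +
                           [odom.pose.pose.position.x,
                            odom.pose.pose.position.y])

    def _is_done(self, observations):
        # Invented termination condition: an obstacle closer than 0.3 m.
        return bool(min(observations[:-2]) < 0.3)

    def _compute_reward(self, observations, done):
        self.cumulated_steps += 1
        return -100.0 if done else 1.0
```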

get_camera_depth_image_raw()[source]
get_camera_depth_points()[source]
get_camera_rgb_image_raw()[source]
get_gps_fix()[source]
get_gps_fix_velocity()[source]
get_imu()[source]
get_laser_scan()[source]
get_odom()[source]
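These getters return the last message stored by the corresponding subscriber callback, so the available fields are those of the standard ROS message types listed in the sensor topic list above. For example, assuming env is an environment derived from SumitXlEnv that has already received data:

```python
# `env` is assumed to be an instance of a SumitXlEnv-derived environment.
gps = env.get_gps_fix()            # sensor_msgs/NavSatFix
lat, lon = gps.latitude, gps.longitude

imu = env.get_imu()                # sensor_msgs/Imu
yaw_rate = imu.angular_velocity.z

scan = env.get_laser_scan()        # sensor_msgs/LaserScan
closest_obstacle = min(scan.ranges)

odom = env.get_odom()              # nav_msgs/Odometry
x = odom.pose.pose.position.x
y = odom.pose.pose.position.y
```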
move_base(linear_speed, angular_speed, epsilon=0.05, update_rate=10)[source]

Moves the base with the given linear and angular speeds and waits until those twists are achieved, reading from the odometry topic.

Parameters:
* linear_speed: Speed along the X axis of the robot base frame
* angular_speed: Angular turning speed of the robot base frame
* epsilon: Acceptable difference between the requested speed and the odometry readings
* update_rate: Rate at which the odometry is checked
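For example, a task environment (or a test script with an initialised ROS node) could command a slow forward drive and then a turn in place; the speed values below are arbitrary:

```python
# `env` is assumed to be a SumitXlEnv-derived environment with the simulation running.
# Drive forward at 0.3 m/s until the odometry confirms the twist.
env.move_base(linear_speed=0.3, angular_speed=0.0, epsilon=0.05, update_rate=10)

# Turn in place at 0.5 rad/s.
env.move_base(linear_speed=0.0, angular_speed=0.5)
```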

wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate)[source]

Waits until the given cmd_vel twist is reached by the robot, reading from the odometry.

Parameters:
* cmd_vel_value: Twist we want to reach
* epsilon: Acceptable error in the odometry readings
* update_rate: Rate at which the odometry is checked
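The idea is a polling loop: read the odometry at update_rate, compare the measured linear and angular velocities against the commanded Twist, and return once both are within epsilon. The function below is an illustrative re-implementation of that idea, not the library's exact code:

```python
import rospy
from nav_msgs.msg import Odometry

def wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate):
    """Poll the odometry until the commanded twist is reached within epsilon."""
    rate = rospy.Rate(update_rate)
    target_linear = cmd_vel_value.linear.x
    target_angular = cmd_vel_value.angular.z
    while not rospy.is_shutdown():
        try:
            odom = rospy.wait_for_message("/summit_xl/odom", Odometry,
                                          timeout=1.0)
        except rospy.ROSException:
            continue  # no odometry message yet, keep waiting
        linear_ok = abs(odom.twist.twist.linear.x - target_linear) <= epsilon
        angular_ok = abs(odom.twist.twist.angular.z - target_angular) <= epsilon
        if linear_ok and angular_ok:
            return
        rate.sleep()
```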