openai_ros.robot_envs package

Submodules

openai_ros.robot_envs.cartpole_env module

class openai_ros.robot_envs.cartpole_env.CartPoleEnv(control_type)[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

__init__(control_type)[source]
__module__ = 'openai_ros.robot_envs.cartpole_env'
_check_all_systems_ready(init=True)[source]
_env_setup(initial_qpos)[source]
_seed(seed=None)[source]
check_publishers_connection()[source]

Checks that all the publishers are working.

get_clock_time()[source]
init_internal_vars(init_pos_value)[source]
joints_callback(data)[source]
move_joints(joints_array)[source]
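
A minimal usage sketch (illustrative, not from the source): it assumes a ROS master and the CartPole Gazebo simulation are already running, and that "velocity" is an accepted control_type:

    import rospy
    from openai_ros.robot_envs.cartpole_env import CartPoleEnv

    rospy.init_node('cartpole_env_demo', anonymous=True)   # hypothetical node name
    env = CartPoleEnv(control_type="velocity")             # control_type value is an assumption
    env.check_publishers_connection()                      # block until the joint publishers connect
    env.move_joints([1.0])                                 # one command per controlled joint
    rospy.loginfo("Sim time: %s", str(env.get_clock_time()))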

openai_ros.robot_envs.cube_rl_utils module

class openai_ros.robot_envs.cube_rl_utils.CubeRLUtils[source]

Bases: object

__init__()[source]
__module__ = 'openai_ros.robot_envs.cube_rl_utils'

check_all_sensors_ready()[source]
check_publishers_connection()[source]

Checks that all the publishers are working.

get_cube_state()[source]
get_distance_from_point(pstart, p_end)[source]

Given a Vector3 object, gets the distance from the current position to the given point.

Parameters:
  • p_end – the end point to measure the distance to.

get_reward_for_observations(state)[source]
joints_callback(data)[source]
move_joints(roll_speed)[source]
observation_checks(cube_state)[source]
odom_callback(data)[source]
openai_ros.robot_envs.cube_rl_utils.cube_rl_systems_test()[source]
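
A hypothetical usage sketch (the node name and roll_speed are assumptions, and the return values are treated as opaque); it assumes a ROS master and the moving-cube Gazebo simulation are running:

    import rospy
    from openai_ros.robot_envs.cube_rl_utils import CubeRLUtils

    rospy.init_node('cube_rl_utils_demo', anonymous=True)
    cube_utils = CubeRLUtils()
    cube_utils.check_all_sensors_ready()              # block until odometry and joint states arrive
    cube_utils.move_joints(roll_speed=5.0)            # spin the internal disk
    cube_state = cube_utils.get_cube_state()
    done = cube_utils.observation_checks(cube_state)
    reward = cube_utils.get_reward_for_observations(cube_state)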

openai_ros.robot_envs.cube_single_disk_env module

class openai_ros.robot_envs.cube_single_disk_env.CubeSingleDiskEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all CubeSingleDisk environments.

__init__()[source]

Initializes a new CubeSingleDisk environment.


__module__ = 'openai_ros.robot_envs.cube_single_disk_env'
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_joint_states_ready()[source]
_check_odom_ready()[source]
_check_publishers_connection()[source]

Checks that all the publishers are working.

_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_joints_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

get_joints()[source]
get_odom()[source]
move_joints(roll_speed)[source]
wait_until_roll_is_in_vel(velocity)[source]
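
A minimal, hypothetical sketch of the disk-roll helpers (it assumes the simulation is running and that env is an instance of a concrete CubeSingleDiskEnv subclass providing the task hooks above):

    env.move_joints(roll_speed=3.0)       # command the internal disk speed
    env.wait_until_roll_is_in_vel(3.0)    # block until the odometry reports that speed
    odom = env.get_odom()                 # latest odometry reading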

openai_ros.robot_envs.hopper_env module

class openai_ros.robot_envs.hopper_env.HopperEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all HopperEnv environments.

__init__()[source]

Initializes a new HopperEnv environment.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • IMU readings: acceleration and orientation (see get_imu()).
  • Joint states of the leg joints (see get_joint_states()).
  • Lower-leg contact sensor state (see get_lowerleg_contactsensor_state()).
  • Odometry of the robot (see get_odom()).

Actuators Topic List:
  • Joint position commands in radians, applied through move_joints().

__module__ = 'openai_ros.robot_envs.hopper_env'
_check_all_publishers_ready()[source]

Checks that all the publishers are working.

_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_imu_ready()[source]
_check_joint_states_ready()[source]
_check_lowerleg_contactsensor_state_ready()[source]
_check_odom_ready()[source]
_check_pub_connection(publisher_object)[source]
_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_contact_callback(data)[source]
_get_obs()[source]
_imu_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_joints_state_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

check_array_similar(ref_value_array, check_value_array, epsilon)[source]

Checks whether check_value_array is similar to ref_value_array, within epsilon.

get_imu()[source]
get_joint_states()[source]
get_lowerleg_contactsensor_state()[source]
get_odom()[source]
move_joints(joints_array, epsilon=0.05, update_rate=10)[source]

Moves the Hopper joints to the given joints_array values.

wait_time_for_execute_movement(joints_array, epsilon, update_rate)[source]

Waits until the joints are where we asked them to be, based on the joint_states readings.

Parameters:
  • joints_array – joint values in radians for each of the three joints of the hopper leg.
  • epsilon – acceptable error in the joint_states readings.
  • update_rate – rate at which we check the joint_states.
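
A hedged usage sketch of the joint interface (env is assumed to be an initialised HopperEnv-based environment with the simulation running; the joint values are illustrative):

    env.move_joints([0.0, 0.5, -0.5],   # target radians for the three leg joints
                    epsilon=0.05,
                    update_rate=10)
    joint_states = env.get_joint_states()   # verify where the joints ended up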

openai_ros.robot_envs.husarion_env module

class openai_ros.robot_envs.husarion_env.HusarionEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all Husarion environments.

__init__()[source]

Initializes a new HusarionEnv environment. Husarion doesn't use the controller_manager, therefore we won't reset the controllers in the standard fashion. For the moment we won't reset them.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /odom: odometry readings of the base of the robot.
  • /camera/depth/image_raw: 2D depth image from the depth sensor.
  • /camera/depth/points: point-cloud sensor readings.
  • /camera/rgb/image_raw: RGB camera.
  • /rosbot/laser/scan: laser readings.

Actuators Topic List:
  • /cmd_vel

__module__ = 'openai_ros.robot_envs.husarion_env'
_camera_depth_image_raw_callback(data)[source]
_camera_depth_points_callback(data)[source]
_camera_rgb_image_raw_callback(data)[source]
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_camera_depth_image_raw_ready()[source]
_check_camera_depth_points_ready()[source]
_check_camera_rgb_image_raw_ready()[source]
_check_laser_scan_ready()[source]
_check_odom_ready()[source]
_check_publishers_connection()[source]

Checks that all the publishers are working.

_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_laser_scan_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

check_angular_speed_dir(angular_speed, angular_speed_noise)[source]

States whether the speed is zero, positive, or negative.

get_camera_depth_image_raw()[source]
get_camera_depth_points()[source]
get_camera_rgb_image_raw()[source]
get_laser_scan()[source]
get_odom()[source]
move_base(linear_speed, angular_speed, epsilon=0.05, update_rate=10)[source]

Moves the base at the given linear and angular speeds. It waits, reading from the odometry topic, until those twists are achieved.

Parameters:
  • linear_speed – speed along the X axis of the robot base frame.
  • angular_speed – angular turning speed of the robot base frame.
  • epsilon – acceptable difference between the speed asked for and the odometry readings.
  • update_rate – rate at which we check the odometry.

wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate, angular_speed_noise=0.005)[source]

Waits for the given cmd_vel twist to be reached by the robot, reading from the odometry. Bear in mind that the angular speed will not be controlled, because it is too imprecise; we only check whether the robot is moving or not, within the angular_speed_noise fluctuations it has.

Parameters:
  • cmd_vel_value – the Twist we want to reach.
  • epsilon – acceptable error in the odometry readings.
  • update_rate – rate at which we check the odometry.
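
A hedged usage sketch (env is assumed to be an initialised HusarionEnv-based environment with the simulation running; the speed values are illustrative):

    env.move_base(linear_speed=0.2, angular_speed=0.0,
                  epsilon=0.05, update_rate=10)   # drive forward and block on the odometry check
    scan = env.get_laser_scan()                   # latest /rosbot/laser/scan message
    odom = env.get_odom()                         # latest /odom message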

openai_ros.robot_envs.parrotdrone_env module

class openai_ros.robot_envs.parrotdrone_env.ParrotDroneEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all ParrotDrone environments.

__init__()[source]

Initializes a new ParrotDroneEnv environment.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /drone/down_camera/image_raw: RGB camera facing down.
  • /drone/front_camera/image_raw: RGB camera facing front.
  • /drone/imu: IMU of the drone, giving acceleration and orientation relative to the world.
  • /drone/sonar: sonar readings facing front.
  • /drone/gt_pose: position and orientation in global space.
  • /drone/gt_vel: linear velocity; the angular part does not record anything.

Actuators Topic List:
  • /cmd_vel: move the drone around once you have taken off.
  • /drone/takeoff: publish into it to take off.
  • /drone/land: publish to make the ParrotDrone land.

__module__ = 'openai_ros.robot_envs.parrotdrone_env'
_check_all_publishers_ready()[source]

Checks that all the publishers are working.

_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_cmd_vel_pub_connection()[source]
_check_down_camera_rgb_image_raw_ready()[source]
_check_front_camera_rgb_image_raw_ready()[source]
_check_gt_pose_ready()[source]
_check_gt_vel_ready()[source]
_check_imu_ready()[source]
_check_land_pub_connection()[source]
_check_sonar_ready()[source]
_check_takeoff_pub_connection()[source]
_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_down_camera_rgb_image_raw_callback(data)[source]
_front_camera_rgb_image_raw_callback(data)[source]
_get_obs()[source]
_gt_pose_callback(data)[source]
_gt_vel_callback(data)[source]
_imu_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

_sonar_callback(data)[source]
check_array_similar(ref_value_array, check_value_array, epsilon)[source]

Checks whether check_value_array is similar to ref_value_array, within epsilon.

get_down_camera_rgb_image_raw()[source]
get_front_camera_rgb_image_raw()[source]
get_gt_pose()[source]
get_gt_vel()[source]
get_imu()[source]
get_sonar()[source]
land()[source]

Sends the landing command and checks the drone has landed. It unpauses the simulation and pauses it again so that the call is a self-contained action.

move_base(linear_speed_vector, angular_speed, epsilon=0.05, update_rate=10)[source]

Moves the base at the given linear and angular speeds. It waits, reading from the odometry topic, until those twists are achieved.

Parameters:
  • linear_speed_vector – speed in the XYZ axes of the robot base frame, because drones can move in any direction.
  • angular_speed – angular turning speed of the robot base frame; this drone only turns on the Z axis.
  • epsilon – acceptable difference between the speed asked for and the odometry readings.
  • update_rate – rate at which we check the odometry.

takeoff()[source]

Sends the takeoff command and checks the drone has taken off. It unpauses the simulation and pauses it again so that the call is a self-contained action.

wait_for_height(heigh_value_to_check, smaller_than, epsilon, update_rate)[source]

Checks whether the current height is smaller or bigger than a given value.

Parameters:
  • smaller_than – if True, we wait until the height is smaller than the value given.

wait_time_for_execute_movement()[source]

Because this ParrotDrone's position is global, we do not really have a way to know whether it is moving in the desired direction; that would require evaluating the difference in position and speed in the local reference frame.

wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate)[source]

TODO: make it work using TF conversions. Waits for the given cmd_vel twist to be reached by the robot, reading from the odometry.

Parameters:
  • cmd_vel_value – the Twist we want to reach.
  • epsilon – acceptable error in the odometry readings.
  • update_rate – rate at which we check the odometry.
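
A hedged flight sketch (it assumes linear_speed_vector is a geometry_msgs/Vector3, matching the XYZ description above; the values are illustrative):

    from geometry_msgs.msg import Vector3

    env.takeoff()                          # self-contained: unpauses, takes off, pauses again
    env.move_base(Vector3(0.5, 0.0, 0.0),  # move along the local X axis
                  angular_speed=0.0,
                  epsilon=0.05,
                  update_rate=10)
    env.land()                             # self-contained landing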

openai_ros.robot_envs.sawyer_env module

class openai_ros.robot_envs.sawyer_env.SawyerEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all SawyerEnv environments.

__init__()[source]

Initializes a new SawyerEnv environment.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /robot/joint_limits: joint limits of the robot.

Actuators Topic List:
  • As actuator we use a class that interfaces with the movements through commands.

__module__ = 'openai_ros.robot_envs.sawyer_env'
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_head_camera_image_raw_ready()[source]
_check_right_hand_camera_image_raw_ready()[source]
_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_head_camera_image_raw_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_map_actions_to_movement(side='right', joint_delta=0.1)[source]
_right_hand_camera_image_raw_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

_setup_movement_system()[source]

Sets up the movement system.

_setup_tf_listener()[source]

Sets up the TF listener for getting the transforms you ask for.

check_joint_limits_ready()[source]
execute_movement(action_id)[source]

Executes the command given through an ID. This can move any joint of Sawyer, including the gripper, if present.

Parameters:
  • action_id – the possible action_id values and their associated actions are:

self.joints[i]+"_increase" and self.joints[i]+"_decrease" for each of the seven joints (i = 0 to 6), plus gripper_close, gripper_open, and gripper_calibrate.
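
A hedged sketch of the action interface (env is assumed to be an initialised SawyerEnv-based environment, and it is assumed that self.joints matches get_limb_joint_names_array()):

    joint_names = env.get_limb_joint_names_array()
    env.execute_movement(joint_names[0] + "_increase")   # nudge the first joint up by joint_delta
    env.execute_movement("gripper_close")
    angles = env.get_all_limb_joint_angles()             # dict({str: float}) of joint angles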

get_all_limb_joint_angles()[source]

Returns a dictionary dict({str: float}) with all the joint angles.

get_all_limb_joint_efforts()[source]

Returns a dictionary dict({str: float}) with all the joint efforts.

get_head_camera_image_raw()[source]
get_joint_limits()[source]
get_limb_joint_names_array()[source]

Returns the Joint Names array of the Limb.

get_right_hand_camera_image_raw()[source]
get_tf_start_to_end_frames(start_frame_name, end_frame_name)[source]

Given two frames, it returns the transform from start_frame_name to end_frame_name. It will only return something other than None if the TFs of the two frames are published on the TF topic and connected through the TF tree.

Parameters:
  • start_frame_name – start frame of the TF transform.
  • end_frame_name – end frame of the TF transform.

Returns: trans, rot of the transform between the start and end frames.
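
A hedged lookup sketch (the frame names "base" and "right_hand" are assumptions for illustration; the (trans, rot) unpacking follows the Returns line above):

    trans, rot = env.get_tf_start_to_end_frames("base", "right_hand")
    if trans is not None and rot is not None:
        print(trans, rot)   # translation vector and rotation quaternion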
init_joint_limits()[source]

Gets the joint limits, in the init phase, where we need to unpause the simulation to read them.

Returns: joint_limits – the joint limits dictionary, with names, angles, velocity and effort limits.

move_joints_to_angle_blocking(joint_positions_dict, timeout=15.0, threshold=0.008726646)[source]

Moves all the joints to the given positions and does not return until they are reached.

set_g(action)[source]
set_j(joint_name, delta)[source]

openai_ros.robot_envs.shadow_tc_env module

class openai_ros.robot_envs.shadow_tc_env.ShadowTcEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all ShadowTcEnv environments.

__init__()[source]

Initializes a new ShadowTcEnv environment.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /imu/data
  • /joint_states

Actuators Topic List:
  • As actuator we use the SmartGrasper class as the interface. We use smart_grasping_sandbox's smart_grasper.py to move and to get the pose of the ball and the tool tip.

__module__ = 'openai_ros.robot_envs.shadow_tc_env'
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_imu_ready()[source]
_check_joint_states_ready()[source]
_check_planning_scene_ready()[source]
_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_imu_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_joints_state_callback(data)[source]
_planning_scene_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

_setup_smart_grasper()[source]

Sets up the movement system.

_setup_tf_listener()[source]

Sets up the TF listener for getting the transforms you ask for.

close_hand()[source]

When called, it closes the robot's hand.

get_ball_pose()[source]

Gets the ball pose. We unpause and pause the simulation because this call is a service call: if the simulation is NOT running, it will not get the ball position information.

Returns: the ball pose in the world frame.

get_fingers_colision(object_collision_name)[source]

Returns the collision state of the three fingers.

Parameters:
  • object_collision_name – the name of the model to check collisions with the fingers against. Objects in the sim: cricket_ball__link, drill__link.

get_imu()[source]
get_joint_states()[source]
get_tip_pose()[source]

Returns the pose of the tip of the TCP. We unpause and pause the simulation because this call is a service call: if the simulation is NOT running, it will not get the TCP position information.

move_tcp_world_frame(desired_pose)[source]

Moves the tool tip (TCP) to the given pose, relative to the world frame.

Parameters:
  • desired_pose – the pose where you want the TCP to move next.

move_tip(x=0.0, y=0.0, z=0.0, roll=0.0, pitch=0.0, yaw=0.0)[source]

Moves by the given XYZ/RPY increments in the world frame. Only set the increments you want; the rest will not move, thanks to the default values.

open_hand()[source]

When called, it opens the robot's hand.

reset_scene()[source]

Restarts the simulation and world objects

send_movement_command(command, duration=0.2)[source]

Sends a dictionary of joint targets to the arm and hand directly. To get the available joint names: rostopic echo /joint_states/name -n1 [H1_F1J1, H1_F1J2, H1_F1J3, H1_F2J1, H1_F2J2, H1_F2J3, H1_F3J1, H1_F3J2, H1_F3J3, elbow_joint, shoulder_lift_joint, shoulder_pan_joint, wrist_1_joint, wrist_2_joint, wrist_3_joint]

Parameters:
  • command – a dictionary of joint names associated with a target: {"H1_F1J1": -1.0, "shoulder_pan_joint": 1.0}
  • duration – the amount of time it will take to get there in seconds. Needs to be bigger than 0.0
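
A hedged usage sketch of the movement helpers (env is assumed to be an initialised ShadowTcEnv-based environment; joint targets, durations and increments are illustrative):

    env.open_hand()
    env.send_movement_command({"H1_F1J1": -1.0, "shoulder_pan_joint": 1.0},
                              duration=0.5)   # duration must be bigger than 0.0
    env.move_tip(z=0.05)                      # raise the tool tip in the world frame
    env.close_hand()
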
set_fingers_colision(activate=False)[source]

Activates or deactivates the finger collisions. It also triggers publishing the collisions into the planning_scene. We pause and unpause for the same exact reason as with getting the TCP and ball poses: being a service, it will not get a response until the simulation is unpaused.

openai_ros.robot_envs.sumitxl_env module

class openai_ros.robot_envs.sumitxl_env.SumitXlEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all SumitXl environments.

__init__()[source]

Initializes a new SumitXlEnv environment.

Executes a call to the service /summit_xl/controller_manager/list_controllers to get the list of controllers to be restarted.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /gps/fix: GPS position data.
  • /gps/fix_velocity: GPS speed data.
  • /hokuyo_base/scan: laser readings.
  • /imu/data: Inertial Measurement Unit data, orientation and acceleration.
  • /orbbec_astra/depth/image_raw
  • /orbbec_astra/depth/points
  • /orbbec_astra/rgb/image_raw
  • /summit_xl/odom: odometry.

Actuators Topic List:
  • /cmd_vel

__module__ = 'openai_ros.robot_envs.sumitxl_env'
_camera_depth_image_raw_callback(data)[source]
_camera_depth_points_callback(data)[source]
_camera_rgb_image_raw_callback(data)[source]
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_camera_depth_image_raw_ready()[source]
_check_camera_depth_points_ready()[source]
_check_camera_rgb_image_raw_ready()[source]
_check_gps_fix_ready()[source]
_check_gps_fix_velocity_ready()[source]
_check_imu_ready()[source]
_check_laser_scan_ready()[source]
_check_odom_ready()[source]
_check_publishers_connection()[source]

Checks that all the publishers are working.

_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_gps_fix_callback(data)[source]
_gps_fix_velocity_callback(data)[source]
_imu_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_laser_scan_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

get_camera_depth_image_raw()[source]
get_camera_depth_points()[source]
get_camera_rgb_image_raw()[source]
get_gps_fix()[source]
get_gps_fix_velocity()[source]
get_imu()[source]
get_laser_scan()[source]
get_odom()[source]
move_base(linear_speed, angular_speed, epsilon=0.05, update_rate=10)[source]

Moves the base at the given linear and angular speeds. It waits, reading from the odometry topic, until those twists are achieved.

Parameters:
  • linear_speed – speed along the X axis of the robot base frame.
  • angular_speed – angular turning speed of the robot base frame.
  • epsilon – acceptable difference between the speed asked for and the odometry readings.
  • update_rate – rate at which we check the odometry.

wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate)[source]

Waits for the given cmd_vel twist to be reached by the robot, reading from the odometry.

Parameters:
  • cmd_vel_value – the Twist we want to reach.
  • epsilon – acceptable error in the odometry readings.
  • update_rate – rate at which we check the odometry.
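
A hedged sketch combining motion with the GPS and IMU accessors (env is assumed to be an initialised SumitXlEnv-based environment; the values are illustrative):

    env.move_base(0.3, 0.0, epsilon=0.05, update_rate=10)   # drive forward
    fix = env.get_gps_fix()                # latest /gps/fix message
    vel = env.get_gps_fix_velocity()       # latest /gps/fix_velocity message
    imu = env.get_imu()                    # latest /imu/data message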

openai_ros.robot_envs.turtlebot2_env module

class openai_ros.robot_envs.turtlebot2_env.TurtleBot2Env[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all TurtleBot2 environments.

__init__()[source]

Initializes a new TurtleBot2Env environment. TurtleBot2 doesn't use the controller_manager, therefore we won't reset the controllers in the standard fashion. For the moment we won't reset them.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /odom: odometry readings of the base of the robot.
  • /camera/depth/image_raw: 2D depth image from the depth sensor.
  • /camera/depth/points: point-cloud sensor readings.
  • /camera/rgb/image_raw: RGB camera.
  • /kobuki/laser/scan: laser readings.

Actuators Topic List:
  • /cmd_vel

__module__ = 'openai_ros.robot_envs.turtlebot2_env'
_camera_depth_image_raw_callback(data)[source]
_camera_depth_points_callback(data)[source]
_camera_rgb_image_raw_callback(data)[source]
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_camera_depth_image_raw_ready()[source]
_check_camera_depth_points_ready()[source]
_check_camera_rgb_image_raw_ready()[source]
_check_laser_scan_ready()[source]
_check_odom_ready()[source]
_check_publishers_connection()[source]

Checks that all the publishers are working.

_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_laser_scan_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

get_camera_depth_image_raw()[source]
get_camera_depth_points()[source]
get_camera_rgb_image_raw()[source]
get_laser_scan()[source]
get_odom()[source]
move_base(linear_speed, angular_speed, epsilon=0.05, update_rate=10)[source]

Moves the base at the given linear and angular speeds. It waits, reading from the odometry topic, until those twists are achieved.

Parameters:
  • linear_speed – speed along the X axis of the robot base frame.
  • angular_speed – angular turning speed of the robot base frame.
  • epsilon – acceptable difference between the speed asked for and the odometry readings.
  • update_rate – rate at which we check the odometry.

wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate)[source]

Waits for the given cmd_vel twist to be reached by the robot, reading from the odometry.

Parameters:
  • cmd_vel_value – the Twist we want to reach.
  • epsilon – acceptable error in the odometry readings.
  • update_rate – rate at which we check the odometry.
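
A hedged sketch of driving and reading the camera stack (env is assumed to be an initialised TurtleBot2Env-based environment; the values are illustrative):

    env.move_base(0.2, 0.0, epsilon=0.05, update_rate=10)   # publish /cmd_vel and wait on odometry
    rgb = env.get_camera_rgb_image_raw()       # latest /camera/rgb/image_raw message
    depth = env.get_camera_depth_image_raw()   # latest /camera/depth/image_raw message
    scan = env.get_laser_scan()                # latest /kobuki/laser/scan message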

openai_ros.robot_envs.turtlebot3_env module

class openai_ros.robot_envs.turtlebot3_env.TurtleBot3Env[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all TurtleBot3 environments.

__init__()[source]

Initializes a new TurtleBot3Env environment. TurtleBot3 doesn't use the controller_manager, therefore we won't reset the controllers in the standard fashion. For the moment we won't reset them.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /odom: odometry readings of the base of the robot.
  • /imu: Inertial Measurement Unit, giving relative accelerations and orientations.
  • /scan: laser readings.

Actuators Topic List:
  • /cmd_vel

__module__ = 'openai_ros.robot_envs.turtlebot3_env'
_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_imu_ready()[source]
_check_laser_scan_ready()[source]
_check_odom_ready()[source]
_check_publishers_connection()[source]

Checks that all the publishers are working.

_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_imu_callback(data)[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_laser_scan_callback(data)[source]
_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

get_imu()[source]
get_laser_scan()[source]
get_odom()[source]
move_base(linear_speed, angular_speed, epsilon=0.05, update_rate=10)[source]

Moves the base at the given linear and angular speeds. It waits, reading from the odometry topic, until those twists are achieved.

Parameters:
  • linear_speed – speed along the X axis of the robot base frame.
  • angular_speed – angular turning speed of the robot base frame.
  • epsilon – acceptable difference between the speed asked for and the odometry readings.
  • update_rate – rate at which we check the odometry.

wait_until_twist_achieved(cmd_vel_value, epsilon, update_rate)[source]

Waits for the given cmd_vel twist to be reached by the robot, reading from the odometry.

Parameters:
  • cmd_vel_value – the Twist we want to reach.
  • epsilon – acceptable error in the odometry readings.
  • update_rate – rate at which we check the odometry.
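
A hypothetical task-environment skeleton (the class name, action mapping, thresholds and rewards are all assumptions) showing the hooks a concrete environment built on TurtleBot3Env must implement:

    import numpy
    from openai_ros.robot_envs import turtlebot3_env

    class MyTurtleBot3TaskEnv(turtlebot3_env.TurtleBot3Env):

        def _set_init_pose(self):
            self.move_base(0.0, 0.0)   # stop the robot before each episode

        def _init_env_variables(self):
            self.cumulated_reward = 0.0   # per-episode bookkeeping

        def _set_action(self, action):
            # Map a discrete action to a twist: 0=forward, 1=turn left, 2=turn right (assumed).
            linear, angular = {0: (0.2, 0.0), 1: (0.0, 0.3), 2: (0.0, -0.3)}[action]
            self.move_base(linear, angular)

        def _get_obs(self):
            scan = self.get_laser_scan()
            return numpy.array(scan.ranges[::60])   # a coarse slice of the laser ranges

        def _is_done(self, observations):
            return bool(numpy.min(observations) < 0.2)   # too close to an obstacle

        def _compute_reward(self, observations, done):
            return -100.0 if done else 1.0   # illustrative reward shaping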

openai_ros.robot_envs.wamv_env module

class openai_ros.robot_envs.wamv_env.WamvEnv[source]

Bases: openai_ros.robot_gazebo_env.RobotGazeboEnv

Superclass for all WamvEnv environments.

__init__()[source]

Initializes a new WamvEnv environment.

To check any topic we need to have the simulation running, so we do two things: 1) Unpause the simulation: without that, the stream of data does not flow (this is for simulations that are paused for whatever reason). 2) If the simulation was already running for some reason, we need to reset the controllers. This has to do with the fact that some plugins that use tf do not understand the reset of the simulation and need to be reset to work properly.

The Sensors: the sensors accessible are the ones considered useful for AI learning.

Sensor Topic List:
  • /wamv/odom: odometry of the base of the Wamv.

Actuators Topic List:
  • /cmd_drive: you publish the speed of the left and right propellers.

__module__ = 'openai_ros.robot_envs.wamv_env'
_check_all_publishers_ready()[source]

Checks that all the publishers are working.

_check_all_sensors_ready()[source]
_check_all_systems_ready()[source]

Checks that all the sensors, publishers and other simulation systems are operational.

_check_odom_ready()[source]
_check_pub_connection(publisher_object)[source]
_compute_reward(observations, done)[source]

Calculates the reward to give, based on the given observations.

_get_obs()[source]
_init_env_variables()[source]

Initialises the variables that need to be reset at the start of each episode.

_is_done(observations)[source]

Checks whether the episode is done, based on the given observations.

_odom_callback(data)[source]
_set_action(action)[source]

Applies the given action to the simulation.

_set_init_pose()[source]

Sets the robot in its initial pose.

get_odom()[source]
set_propellers_speed(right_propeller_speed, left_propeller_speed, time_sleep=1.0)[source]

Sets the speed of each of the two propellers of the Wamv.

wait_time_for_execute_movement(time_sleep)[source]

Because this Wamv's position is global, we do not really have a way to know whether it is moving in the desired direction; that would require evaluating the difference in position and speed in the local reference frame.
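
A hedged sketch of driving the Wamv (env is assumed to be an initialised WamvEnv-based environment; the propeller speeds are illustrative):

    env.set_propellers_speed(right_propeller_speed=1.0,
                             left_propeller_speed=1.0,
                             time_sleep=1.0)    # equal speeds: drive straight for ~1 s
    env.set_propellers_speed(1.0, -1.0)         # opposite speeds: turn in place
    odom = env.get_odom()                       # latest /wamv/odom message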

Module contents