openai_ros.task_envs.sumit_xl package


openai_ros.task_envs.sumit_xl.sumit_xl_room module

class openai_ros.task_envs.sumit_xl.sumit_xl_room.SumitXlRoom[source]

Bases: openai_ros.robot_envs.sumitxl_env.SumitXlEnv


This Task Env is designed to place the sumit_xl robot in the sumit_xl world, a closed room with columns. It will learn how to move around without crashing.

__module__ = 'openai_ros.task_envs.sumit_xl.sumit_xl_room'
_compute_reward(observations, done)[source]

We give a reward to the robot when it gets closer to the desired point. We don't give it constantly, but only when there is an improvement.
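The improvement-based reward described above can be sketched as follows. The function name and the reward values are illustrative assumptions, not the constants used by SumitXlRoom:

```python
def compute_improvement_reward(previous_distance, current_distance,
                               closer_reward=10.0, step_penalty=-1.0):
    """Reward the robot only when it has moved closer to the goal.

    `closer_reward` and `step_penalty` are illustrative values, not
    the ones defined by SumitXlRoom.
    """
    if current_distance < previous_distance:
        # Progress toward the desired point: positive reward.
        return closer_reward
    # No improvement: small penalty to discourage idling.
    return step_penalty
```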


Here we define what sensor data makes up our robot's observations. To know which variables we have access to, we need to read the SumitXlEnv API docs.

WALL CLOSE LEFT  [1, 1, 9, 0, 0, 0, -1.8, 0.46, 0.01]
WALL CLOSE RIGHT [0, 0, 0, 10, 1, 2, -1.8, -0.61, 0.01]
WALL BACK        [0, 9, 1, 1, 6, 0, -1.8, -0.54, 1.59]
WALL FRONT       [2, 9, 0, 0, 2, 2, -1.83, 0.51, 1.58]

A reading of 0 corresponds in reality to around 0.4 at the front, 0.5 at the back, and 0.3 at the sides.


Inits the variables that need to be initialised each time we reset, at the start of an episode. :return:


This will set the linear and angular speed of the SumitXl based on the action number given. :param action: The action integer that sets which movement to do next.
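A minimal sketch of mapping a discrete action integer to velocities. The action set (forward / turn left / turn right) and the speed values are assumptions for illustration; the real mapping is defined inside the Task Env:

```python
def action_to_speeds(action, linear_speed=0.3, angular_speed=0.3):
    """Map a discrete action integer to a (linear, angular) speed pair.

    The action numbering and speed magnitudes here are illustrative
    assumptions, not the values used by SumitXlRoom.
    """
    if action == 0:    # move forward
        return (linear_speed, 0.0)
    elif action == 1:  # turn left
        return (0.0, angular_speed)
    elif action == 2:  # turn right
        return (0.0, -angular_speed)
    raise ValueError("unknown action: {}".format(action))
```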


Sets the Robot in its init pose

discretize_scan_observation(data, new_ranges)[source]

Discards all the laser readings whose index is not a multiple of the new_ranges value.
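The subsampling described above can be sketched as below. The clamping of infinite or NaN readings to a maximum value is an assumption added to keep the sketch self-contained:

```python
def discretize_scan(ranges, new_ranges, max_value=10.0):
    """Keep only every `new_ranges`-th laser reading.

    Replacing inf/NaN readings with `max_value` is an assumption for
    illustration, not necessarily what SumitXlRoom does.
    """
    discretized = []
    for i, reading in enumerate(ranges):
        if i % new_ranges == 0:
            # `reading != reading` is True only for NaN.
            if reading == float('inf') or reading != reading:
                discretized.append(max_value)
            else:
                discretized.append(reading)
    return discretized
```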


Calculates the distance from the current position to the desired point. :param start_point: :return:

get_distance_from_point(pstart, p_end)[source]

Given a Vector3 object, gets the distance from the current position. :param p_end: :return:
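The distance computation is a standard Euclidean distance. The sketch below uses plain (x, y, z) tuples instead of geometry_msgs Vector3 objects to stay self-contained:

```python
import math

def distance_between_points(pstart, p_end):
    """Euclidean distance between two 3-D points given as (x, y, z) tuples.

    The real method receives ROS geometry_msgs objects; tuples are used
    here only to keep the example runnable on its own.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pstart, p_end)))
```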


It calculates the magnitude of the given Vector3. This is useful for reading IMU accelerations and knowing if there has been a crash. :return:
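A sketch of the magnitude computation and its use as a crash signal. The crash threshold is an illustrative assumption, and plain tuples stand in for Vector3 objects:

```python
import math

def vector_magnitude(vector):
    """Magnitude of a 3-D vector given as an (x, y, z) tuple."""
    return math.sqrt(sum(c * c for c in vector))

def has_crashed(imu_acceleration, threshold=7.0):
    """A spike in IMU linear-acceleration magnitude suggests a crash.

    The threshold value is an assumption for illustration only.
    """
    return vector_magnitude(imu_acceleration) > threshold
```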

is_in_desired_position(current_position, epsilon=0.05)[source]

It returns True if the current position is similar to the desired position.
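One plausible reading of this check is an axis-aligned epsilon box around the desired position; the desired point being passed explicitly is an assumption, since the real method reads it from the environment's state:

```python
def is_in_desired_position(current, desired, epsilon=0.05):
    """True if `current` (x, y) lies inside the epsilon box around `desired`.

    The box-style comparison and the explicit `desired` argument are
    assumptions for illustration.
    """
    x_close = desired[0] - epsilon <= current[0] <= desired[0] + epsilon
    y_close = desired[1] - epsilon <= current[1] <= desired[1] + epsilon
    return x_close and y_close
```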

Module contents