Posts tagged Autonomous Vehicles

How is Perception Made? Neural Net vs. Geometrical Analysis of the Camera Signal

For years, many automobile manufacturers have been working to realise the concept of autonomous vehicles, but the topic truly caught fire when Tesla's CEO, Elon Musk, announced a crash program to bring autonomous vehicles onto the roads. From there, fierce competition began. Ground realities, however, suggest that people are still reluctant to buy such vehicles. Plenty of studies and articles show that selling autonomous vehicles is a much bigger challenge than manufacturing them; even leaders in the field, such as Tesla, face problems selling AVs to potential customers. The main reason is people's reluctance to trust autonomous operation: they still do not understand how an autonomous vehicle builds its perception of the world. Let's look at how perception is made in AVs and compare the two main approaches to see how decisions are reached.

How Is Perception Made?

Perception in autonomous vehicles serves the same role that human senses serve in conventional cars. Human sensory abilities are replicated with various sensors such as radar, LiDAR, ultrasonic sensors, sonar, GPS, and multiple cameras; these are the sources from which perception is built. Note that not every manufacturer uses all of these sensors: some may use only a subset, or employ other sensors entirely. Generally, though, the sensors above are among the most common sources for building perception in autonomous vehicles.

The data received from these sensors is fed to an interpreter, which translates it into actionable information. That interpreted information is then used to make decisions and, ultimately, to drive the vehicle.

Among all the sensory equipment, the camera is considered the most vital part of an autonomous vehicle, because it is a visual instrument: it replicates the human eye and can identify objects in a scene much as humans do. But processing is not as simple as capturing an image and making a decision. Many machine learning and deep learning techniques are used to refine the visuals obtained from the cameras and embed intelligence in the vehicle.

The following are the two key techniques used to process and analyse camera visuals in an autonomous vehicle:

Geometrical Analysis

While driving in an urban environment, adherence to road-lane rules is mandatory: it ensures safe travel and reduces road accidents. Human drivers can be compelled to follow lane rules; in an autonomous vehicle, with no human in the loop, compliance must be achieved some other way. Geometrical analysis of the camera signal can provide it.

With forward-looking cameras, images and data about the longitudinal markings of the road can be extracted. Successful extraction lets the system locate the vehicle with respect to the lane boundary; information about the vehicle's lateral distances lets it determine the right and left boundaries of the lane and thus keep the vehicle in-lane. Such geometric data is especially important where GPS signals are unavailable or disrupted. A minimal sketch of this idea follows.
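The sketch below illustrates one way such a geometrical analysis could look, assuming OpenCV is available; the thresholds and the simple left/right averaging are illustrative assumptions, not a production lane tracker.

```python
# Hedged sketch: extract lane-marking segments and estimate lateral offset.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Return line segments (x1, y1, x2, y2) for painted lane markings."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # edge map of the road scene
    h, w = edges.shape
    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255                        # keep the lower half (road surface)
    edges = cv2.bitwise_and(edges, mask)
    # Probabilistic Hough transform finds straight marking segments.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else [l[0] for l in lines]

def lateral_offset(lines, image_width):
    """Estimate the vehicle's offset from the lane centre, in pixels.
    Assumes the camera is mounted on the vehicle's centreline."""
    left = [l for l in lines if (l[0] + l[2]) / 2 < image_width / 2]
    right = [l for l in lines if (l[0] + l[2]) / 2 >= image_width / 2]
    if not left or not right:
        return None   # one boundary not visible; fall back to other sensors
    left_x = np.mean([(l[0] + l[2]) / 2 for l in left])
    right_x = np.mean([(l[0] + l[2]) / 2 for l in right])
    lane_centre = (left_x + right_x) / 2
    return (image_width / 2) - lane_centre        # positive: vehicle left of centre
```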

Another way to ease geometrical analysis is to designate separate lanes for AVs. Since future mobility will rely heavily on AVs, and many governments are supporting their development, reserving a lane for AVs or embedding magnetic strips along lane boundaries could improve how autonomous vehicles work.

Neural Network

Machine learning is meant for systems where smartness and autonomy are required. Since the operation of an autonomous vehicle depends heavily on such algorithms, machine learning has great potential here.

A neural network (sometimes called a neuronal network) applies machine learning principles in a structure loosely modelled on the human brain. Its neurons are trained much as humans learn to make decisions: data from the perception system is fed to the network, which is trained on these datasets and then makes the required decisions based on new inputs.

For example, while commuting from Point A to Point B, an autonomous vehicle stops at a red traffic signal even when the signal is at the far corner of the road. This intelligent behaviour comes from the embedded neural network. The network is trained to look for traffic lights at different heights and angles through the camera: datasets containing images of traffic signals at different angles, positions, and elevations are gathered and used for training, and the network is also trained on the different light states such as red, yellow, and green. Once trained, the network embeds this intelligence into the system, so whenever a similar situation occurs it detects, analyses, and acts much as a human would. Road signs, intersections, distance from an obstacle or vehicle, and path following are other key tasks for neural networks analysing camera data. A toy sketch of such a classifier appears below.
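To make the idea concrete, here is a minimal sketch of a convolutional classifier for traffic-light crops, assuming PyTorch; the architecture, class names, and the random stand-in data are illustrative assumptions, not a production model.

```python
# Hedged sketch: a tiny CNN that classifies traffic-light image crops.
import torch
import torch.nn as nn

CLASSES = ["red", "yellow", "green"]

class TrafficLightNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):              # x: (batch, 3, 32, 32) image crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TrafficLightNet()
# One illustrative training step on random stand-in data; real training would
# use labelled crops of traffic lights at varied angles, heights, and weather.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print("predicted:", CLASSES[model(images)[0].argmax().item()])
```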

Conclusion

The discussion above shows that the perception model of autonomous vehicles is multi-dimensional. The approach is novel and quite close to reality, and it fulfils the prerequisites for making autonomous vehicles a safer choice. Comparing the two techniques, the neural-network approach appears the more relevant and effective, chiefly because of its AI foundation and capacity for continual learning. It is trained to handle ordinary road driving, continual learning keeps it improving, and it can take into account almost every component of the perception model: whether the incoming data is from a camera or a LiDAR, a neural network can translate that information and make appropriate decisions.

References

  1. https://eldorado.tu-dortmund.de/handle/2003/38044
  2. https://www.businessinsider.com/teslas-biggest-problem-with-self-driving-cars-is-that-it-has-to-sell-them-2020-2

Autonomous Driving and Recognition of Turn, Stop, and other Traffic Signals using Camera Input – What are the Challenges?

According to the WHO, about 1.35 million people die every year in road accidents, mainly because of human error. Humans can drive recklessly: sometimes they follow the rules and sometimes they don't, and it is largely when traffic rules are violated that accidents occur. Traffic rules exist precisely to ensure safe driving. Alongside other technological advances, the rising number of traffic accidents motivated automobile manufacturers to pursue autonomous vehicles. With multidimensional sensing, analysis, decision-making, and control systems, autonomous vehicles can travel much more safely than human drivers, for the basic reason that machines don't get tired. Intelligent systems also ensure that autonomous vehicles do not violate the rules. Yet despite these clear benefits, many people still doubt the reliability of autonomous vehicles.

Of all the grievances about autonomous vehicles, concerns about the recognition system are of particular importance. There is a need to shed light on how an autonomous vehicle's recognition system handles the many road and traffic signs it encounters, because it is the recognition system that lets the vehicle act intelligently. Camera input, and the state-of-the-art results recognition systems achieve on it, are what argue for the effectiveness and intelligence of autonomous vehicles.

Recognition of Traffic Signals using Camera Input

While radar, LiDAR, GPS, and ultrasonic sensors help an autonomous vehicle measure distances and map the external environment, cameras give a visual perspective of it. Close coordination of all sensing units is key to the vehicle's success. Let's see how camera input is recognised, from the perspective of road signs and traffic signals.

In the development stage, the perception and interpretation model of an autonomous vehicle is trained on the various objects and environments it is expected to encounter in the real world. This information is stored in a matrix for each object. For a turn-right sign, for example, images of the sign in sunny, cloudy, rainy, snowy, and dark conditions are stored in its matrix, and the same is done for other road signs and traffic signals. While travelling, the camera constantly captures the surroundings. Captured images are sent to the interpretation model, where they are fine-tuned to filter noise and fix resolution issues, and from there they pass to the image-processing algorithm.

For autonomous vehicles, image processing has three fundamental stages. The first, pre-processing, converts the RGB image to Hue-Saturation-Value (HSV) colour space. The second, detection, applies a colour-threshold method for initial filtering and scans the result to establish a Region of Interest (ROI). The third, recognition, finalises the processing and decides the fate of the captured image: commonly, a Support Vector Machine (SVM) together with a Histogram of Oriented Gradients (HOG) descriptor is employed to recognise what was captured, i.e., whether it is a traffic signal or a road sign, and to extract its information. For example, if the image is recognised as a STOP sign, that instruction is extracted as the useful information and passed to the vehicle control system to begin deceleration or braking. A minimal sketch of this three-stage pipeline follows.
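The sketch below follows the three stages just described, assuming OpenCV, scikit-image, and scikit-learn; the HSV thresholds, ROI filter, and the placeholder classifier are illustrative assumptions rather than tuned values.

```python
# Hedged sketch: HSV pre-processing, colour-threshold detection, HOG+SVM recognition.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def preprocess(frame_bgr):
    """Stage 1: convert the camera frame to HSV colour space."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

def detect_rois(hsv):
    """Stage 2: colour-threshold for red signage, then find candidate ROIs."""
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))      # low red hues
    mask |= cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))  # high red hues
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

def recognise(frame_bgr, roi, clf):
    """Stage 3: HOG features from the ROI, classified by an SVM."""
    x, y, w, h = roi
    crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (64, 64))
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    feat = hog(gray, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return clf.predict([feat])[0]     # e.g. "stop_sign", "speed_limit", ...

# Placeholder classifier fitted on random vectors so the sketch runs; a real
# system trains the SVM offline on labelled sign images.
feat_len = hog(np.zeros((64, 64), dtype=np.uint8),
               pixels_per_cell=(8, 8), cells_per_block=(2, 2)).shape[0]
clf = SVC().fit(np.random.rand(4, feat_len),
                ["stop_sign", "stop_sign", "speed_limit", "speed_limit"])
```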

Challenges

While these recognition methodologies appear substantially accurate, this highly intelligent pipeline faces many challenges. As competition grows and more research is published on autonomous vehicles, it has become clear that the recognition system faces, or will soon face, several difficulties. Some of the most pressing are as follows:

2D Perspective of Cameras

The world around us is 3D, but a camera gives a 2D perspective. Although many companies have launched 3D cameras, their efficacy for autonomous vehicles is still being evaluated. Because the camera sees 3D objects from a 2D perspective, there is a real risk of perspective loss: the projection may discard information, so the AV's cameras cannot give full insight into the external environment, ultimately making the vehicle less effective.

Globally Varying Road Sign Designs

Road signs are broadly similar worldwide, but some are region-specific or rendered differently from the rest of the world. In most countries, for example, the STOP sign is the word written inside a red octagon, but in countries such as Israel, Ethiopia, and Pakistan a raised hand is displayed instead, and some Arab countries write STOP in Arabic. The challenge for engineers is to develop a recognition system able to recognise all such variants of road signs as well as different traffic signals.

Effectiveness of AV Algorithms

Varying road signs give rise to a further challenge: the vast database required, and the effectiveness of the machine learning built on it. Since the operation of autonomous vehicles is chiefly governed by machine learning algorithms, it is a major undertaking just to assemble such a database of road signs with their multi-dimensional matrices. And building the database does not solve the problem by itself; it is an equally serious challenge to train the vehicle's systems to the point where they recognise these signs flawlessly in the real world.

Conclusion

To encourage the use of autonomous vehicles, it is important to educate people about their efficacy and safety. Only when consumers are sure that an autonomous vehicle is intelligent enough to decide on its own, backed by a flawless recognition system, will the dream of transforming future mobility with AVs be realised. The intelligent recognition of camera inputs deserves real credit: AVs can recognise distant and highly elevated road signs with fast processing and millisecond actuation. But since continued learning is the prerequisite of excellence, the same applies here: the future of autonomous vehicles depends on confronting the challenges ahead.

References

  1. https://eldorado.tu-dortmund.de/handle/2003/38044
  2. https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries
  3. https://www.frontsigns.com/blog/the-difference-of-world-traffic-signs

Software Architecture in Autonomous Vehicles and its Safety Concerns

Autonomous vehicles may seem a very novel concept for future mobility, but in essence they involve a range of complexities. Cast a critical eye over the architecture and working principles of an autonomous vehicle and you will find little but complexity and artificial intelligence. The general public may regard obstacle avoidance and path planning as just an advanced version of conventional driving, but those familiar with autonomous vehicles know that all this autonomy and intelligence comes at the cost of complexity. In typical operation, the vehicle must deal with constant data from the perception model, rigorous processing of that information, and intelligent decision-making to forward commands to the vehicle control system.

The question to ponder is: what ensures coordination among such complex systems, and how is it ensured at such scale? The simplest answer is software architecture. The main purpose of introducing software architectures into autonomous vehicles is to manage the complexity of the autonomous driving system; they also help in assessing both the functional and non-functional attributes of that system.

To gain better insight into these software architectures, let's look at what they offer and how they can help mitigate the complexity and safety issues of autonomous vehicles.

Software Architecture in Autonomous Vehicles

The software architecture used in autonomous vehicles varies from company to company, so for the sake of an overview, we discuss general considerations about AV software architecture.

The software architecture of an autonomous vehicle is usually recommended to be layered. The foremost benefit of a layered architecture is that the vehicle's individual systems map onto distinct layers, each responsible for its own duties. This division of responsibilities makes the vehicle easier to understand and troubleshoot, makes the system more manageable, and counters its complexity. A general layered architecture can comprise an input normalization layer, an action planning layer, a control layer, and an output layer; depending on the system's constraints, these layers can be further divided into sublayers. The general working mechanism of the layered software architecture is as follows:

  1. Perceptions from the sensors and visual equipment such as the camera are passed to the input normalization layer. This layer takes in the information and prepares it to be passed to the action planning layer.
  2. The action planning layer receives the normalized inputs and begins semantic processing of the information, such as identifying the drivable lane. Based on the results, it activates the path-planning algorithm, which determines the direction the vehicle should head. The layer's final output, the commands that decide the manoeuvre, is passed to the control layer.
  3. The control layer decides which commands the vehicle's software shall execute, such as whether to activate the acceleration algorithm or trigger the turn-left algorithm. Information about the selected command or algorithm is then passed to the output layer.
  4. The output layer is where action happens: it is responsible for actuating the system, in other words, for hardware abstraction. For example, if the input, action planning, and control layers decide that the steering system should be actuated, the output layer actuates the steering at the defined angle and direction.

Depending on the vehicle's model and features, a data logging layer and a network layer can be included too. The network layer assists with communication and the UI/UX interface, while the data logging layer maintains logs of the itinerary and the vehicle's overall operations. A minimal sketch of this layered flow follows.
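The sketch below mirrors the layer names in the list above; the message format and the decision logic inside each layer are simplified assumptions, not a real AV stack.

```python
# Hedged sketch: a message flowing through the four layers described above.
class InputNormalizationLayer:
    def process(self, raw_sensor_data):
        # Unify units/coordinate frames so planning sees one consistent view.
        return {"lane_offset_m": raw_sensor_data.get("lane_offset_m", 0.0),
                "obstacle_ahead_m": raw_sensor_data.get("obstacle_ahead_m")}

class ActionPlanningLayer:
    def process(self, normalized):
        # Semantic step: decide a manoeuvre from the normalised inputs.
        obstacle = normalized["obstacle_ahead_m"]
        if obstacle is not None and obstacle < 10:
            return {"manoeuvre": "brake"}
        return {"manoeuvre": "keep_lane", "offset": normalized["lane_offset_m"]}

class ControlLayer:
    def process(self, plan):
        # Map the chosen manoeuvre onto a concrete actuator command.
        if plan["manoeuvre"] == "brake":
            return {"actuator": "brakes", "value": 1.0}
        return {"actuator": "steering", "value": -0.1 * plan["offset"]}

class OutputLayer:
    def process(self, command):
        # Hardware abstraction: hand the command to the actuator drivers.
        print(f"actuating {command['actuator']} -> {command['value']:.2f}")

pipeline = [InputNormalizationLayer(), ActionPlanningLayer(),
            ControlLayer(), OutputLayer()]
message = {"lane_offset_m": 0.4, "obstacle_ahead_m": None}
for layer in pipeline:
    message = layer.process(message)
```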

Safety Considerations

While focusing on the software architecture of autonomous vehicles, it would be disastrous to ignore safety. Reliability and safety are the two most sought-after qualities of an autonomous vehicle, and there are multiple ways to ensure both.

Emergency Brake System

The emergency brake system should be an essential part of the software architecture of an autonomous vehicle. Given its importance, a separate layer can be defined for it so that, when triggered, it suffers no time lag.

Self-Adaptation

Self-adaptation is another way to ensure system safety. A self-adaptive mechanism responds to required changes through a feedback loop: the vehicle's current status is incorporated into the ongoing computation, so that calculations of steering angle and speed become more accurate and safer.

Timeout Mechanism

Besides these two key safety measures, a timeout mechanism can be built into the action planning layer. It helps in situations where no new image or data arrives from the sensing environment and the vehicle still needs to react reasonably; a minimal sketch follows.
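Here is a minimal sketch of such a timeout guard; the 200 ms budget and the fallback behaviour are illustrative assumptions.

```python
# Hedged sketch: a stale-data watchdog inside the action planning layer.
import time

SENSOR_TIMEOUT_S = 0.2    # assumed budget before the vehicle must react safely
last_frame_time = time.monotonic()

def on_new_sensor_frame():
    """Called whenever the perception pipeline delivers a fresh frame."""
    global last_frame_time
    last_frame_time = time.monotonic()

def plan_next_action():
    """Plan normally on fresh data; degrade gracefully when data is stale."""
    if time.monotonic() - last_frame_time > SENSOR_TIMEOUT_S:
        # No fresh perception data: slow down instead of guessing.
        return {"manoeuvre": "slow_down_and_hold_lane"}
    return {"manoeuvre": "follow_planned_path"}

print(plan_next_action())
```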

Conclusion

Given how complex the systems operating in an autonomous vehicle are, software architecture is very much needed. A layered architecture not only segments the work but is also expected to increase the system's reliability considerably: each layer does its job and passes the result to the next. Another advantage is more efficient use of processing: from information processing to the control system, each segment does only its designated job, relieving the others of unnecessary burden. Alongside these considerations, safety must be prioritised, because public acceptance of autonomous vehicles depends on their safety and reliability.

References

  1. https://eldorado.tu-dortmund.de/handle/2003/38044
  2. https://www.atlantis-press.com/journals/jase/125934832/

How can the Signal from Perception be sent to the Vehicle’s Control? How can it be Interpreted?

At first glance, it seems easy for an autonomous vehicle to collect information from its sensors and act accordingly. In reality, perception is the most challenging and complex phase to model, because multiple sensors work simultaneously and must be integrated into a single coherent picture. Depending on the environment in which the vehicle is driven, the volume of data to be processed can be huge: autonomous vehicles generate massive amounts of data within fractions of a second, which hints at the enormous processing power required. That is a challenge in itself, but for now let's stick to the interpretation of perception signals and their influence on vehicle control. To understand how perception signals steer the control system, it is essential to first understand how they are interpreted.

How does Interpretation work in an Autonomous Vehicle?

From the autonomous vehicle's perspective, the perception model is a combination of different sensors. These sensors sense the external environment and pass their readings to a translation block known as the interpretation model. Since the raw data can be scattered, the interpretation block not only processes it but also fine-tunes it into a more discrete, understandable form.

For example, consider a situation where an autonomous vehicle's perception has detected an obstacle just 10 m ahead, making a collision possible. Based on this information, the emergency brake system is triggered and the vehicle stops to avoid the collision. Note that the discrete figure of 10 meters was not conveyed by the sensor itself: the sensor detected the obstacle and passed its raw reading to the interpreter, which filtered the data and applied the speed-time relation to compute the 10-meter distance. This is how interpretation works in an autonomous vehicle; the sketch below illustrates the idea.
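A minimal sketch of that interpretation step follows, assuming an ultrasonic-style time-of-flight sensor; the sensor interface and the 10 m braking threshold from the example are illustrative assumptions.

```python
# Hedged sketch: turn a raw echo delay into a distance and a braking decision.
def interpret(echo_delay_s, own_speed_mps, obstacle_speed_mps,
              wave_speed_mps=343.0, brake_distance_m=10.0):
    # Distance from time of flight: speed x time, halved for the round trip.
    distance_m = wave_speed_mps * echo_delay_s / 2
    closing_speed = own_speed_mps - obstacle_speed_mps
    collision_possible = closing_speed > 0 and distance_m <= brake_distance_m
    return distance_m, collision_possible

# Example: an echo delay of ~58.3 ms corresponds to an obstacle ~10 m ahead.
distance, brake = interpret(echo_delay_s=0.0583, own_speed_mps=14.0,
                            obstacle_speed_mps=0.0)
print(f"obstacle at {distance:.1f} m -> emergency brake: {brake}")
```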

Information processing and algorithms may differ across autonomous vehicles depending on their features and working models, but the general essence of the interpretation model remains the same.

Process of Sending a Signal from the Perception to the Vehicle Control

It is said that humans take roughly 0.15 seconds to a few seconds to decide and act in a road emergency: time spent analysing the situation, deciding a course of action, and then acting to control the unfavourable circumstances. Critics note that this varies case by case and may extend to several seconds. In that light, it is fair to say that the time an autonomous vehicle takes to generate and interpret perceptions, plus the time for the vehicle control to act on those interpretations, should be close to this human perception-reaction time.

When the perception model detects something, it sends the collected data to the interpreter model, where the information is first filtered and fine-tuned for better processing. This discrete data is then matched against the underlying conditions: the system checks whether the data corresponds to any triggering value, say a violation of the safe distance from objects. If the received information demands a contingency action, it is translated into a format the defined algorithms and processes can understand; the relevant algorithm is then called, and the corresponding control mechanisms are triggered to perform their designated operations. That is the overview of sending a perception signal to the vehicle control. Let's make it concrete with a specific scenario.

Consider an autonomous vehicle on its way to Point X. While commuting on the highway, it constantly takes in perceptions from its many sensors, such as cameras and radar. Suppose a road sign is detected that limits the speed to 90 km/h. The camera captures and locates this road sign and sends it to the interpreter, where image-processing algorithms refine the image and extract that "90 km/h" is written on something resembling a road sign. How does the interpreter know it is a road sign and not, say, a sticker on a moving car? The answer lies in the vehicle's intelligence: using neural networks, it has been trained to distinguish among different road signs. Once processed, the detection is passed to the speed control algorithm, which activates the vehicle control system to decelerate. Deceleration continues until the vehicle reaches 90 km/h, and from then until the next relevant perception, the speed is held at 90 km/h. A minimal sketch of this control step follows.
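The sketch below models only the final step of the scenario, the speed-control reaction to the recognised sign; the deceleration rate and update interval are illustrative assumptions.

```python
# Hedged sketch: a recognised "90 km/h" sign drives the speed controller.
def speed_control(current_kmh, target_kmh, decel_kmh_per_s=5.0, dt=0.1):
    """One control tick: move the commanded speed toward the target limit."""
    if current_kmh > target_kmh:
        return max(target_kmh, current_kmh - decel_kmh_per_s * dt)
    return current_kmh    # at or below the limit: maintain speed

speed = 120.0             # speed when the 90 km/h sign is recognised
while speed > 90.0:       # decelerate tick by tick until the limit is reached
    speed = speed_control(speed, target_kmh=90.0)
print(f"holding {speed:.0f} km/h until the next perception")
```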

This systematic process sounds like it should introduce a lot of delay, but in reality, with fast processors and intelligent control strategies, sending perception signals through to the vehicle control system is extremely fast.

Conclusion

Many myths surround the working mechanism of autonomous vehicles. To some extent they make sense: people are generally reluctant to adopt something they have never heard of, and the invention is so rich in its essence that people are astonished by it. For them, such inventions belong in science fiction movies, and when the concepts are realised they can hardly believe it. Walking through the process just described, from building perception to controlling the vehicle, makes the idea of autonomous vehicles much easier to comprehend, and the high-tech algorithms and high-end perception models involved can surely convince people of their efficiency.

References

  1. https://eldorado.tu-dortmund.de/handle/2003/38044
  2. https://blogs.nvidia.com/blog/2018/08/10/autonomous-vehicles-perception-layer/
  3. https://www.visualexpert.com/Resources/reactiontime.html

What are the Main Building Blocks to Build an Autonomous Vehicle?

For the past few years, autonomous vehicles have received an unprecedented welcome, and every automobile manufacturer is exploring ways to enter the business as its capacity allows. While some people consider the term "autonomous vehicles" brand new, for technology enthusiasts it has been in the research phase for decades. One of the first practical steps was the Stanford Cart, built at Stanford University around 1960. In its later, camera-guided form, this cart-like vehicle could navigate around obstacles using early AI algorithms, but it took roughly 10 to 15 minutes to move one meter.

You may be struck by how long it took to move one meter. Given today's technological progress, waiting that long to cover a single meter seems absurd. Compared with the past, AI algorithms have improved enormously, design tools are abundant, and research methods are enriched with utilities, all of which paves the way for better autonomous vehicles. This is why technology giants like Tesla, Uber, Google, Ford, and General Motors are in fierce competition to build effective and safe autonomous vehicles.

Building Blocks of an Autonomous Vehicle

Many technologies serve the purpose of automation in different sectors: programming brings automation to software, while the Programmable Logic Controller (PLC) brings automation to industrial machines. But the automotive sector is a different regime altogether, and autonomous vehicles draw on several technologies at once. It is better to say that building an autonomous vehicle involves cross-collaboration among different building blocks, each with its own methodologies and working mechanisms. The following is a detailed introduction to the basic building blocks needed to build an autonomous vehicle:

Perception

Perception in the world of autonomous vehicles is not quite what the word means in everyday use. Here, perception uses a rich combination of high-tech cameras and sensors to get real-time data about surrounding objects. As the first line of defence, reliable around-the-clock perception is crucial, because core decision-making for every autonomous function depends on it. There are multiple perception elements, such as radar, LiDAR, and combinations of cameras. One may ask: if getting data is the goal, and radar and LiDAR fulfil it, why use cameras? The answer is that cameras, radar, and LiDAR are combined in a process known as sensor fusion, which not only labels objects but confirms them. For example, radar may identify a body in front of the car moving with velocity X in direction Y, but it is the camera that confirms whether the moving object is a car, a cyclist, or a pedestrian. This confirmation is critical, because the vehicle's decisions depend heavily on the type and nature of moving objects. A minimal sketch of the fusion idea follows.
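The sketch below shows the confirmation step just described: the radar contributes position and velocity, the camera contributes the object class, and the fused track carries both. The data layout and confidence threshold are illustrative assumptions.

```python
# Hedged sketch: fusing a radar track with a camera detection.
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float
    velocity_mps: float
    bearing_deg: float

@dataclass
class CameraDetection:
    label: str            # e.g. "car", "cyclist", "pedestrian"
    confidence: float

@dataclass
class FusedObject:
    label: str
    range_m: float
    velocity_mps: float

def fuse(radar: RadarTrack, camera: CameraDetection,
         min_confidence: float = 0.6) -> FusedObject:
    # Only accept the camera's label when it is confident; otherwise keep
    # the object "unknown" so downstream logic stays conservative.
    label = camera.label if camera.confidence >= min_confidence else "unknown"
    return FusedObject(label, radar.range_m, radar.velocity_mps)

obj = fuse(RadarTrack(32.0, -3.5, 2.0), CameraDetection("cyclist", 0.87))
print(obj)   # FusedObject(label='cyclist', range_m=32.0, velocity_mps=-3.5)
```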

Interpretation

While perception is the sensory component of the autonomous vehicle, sensing external parameters, interpretation translates that sensing information into actionable form. This is where algorithms such as line following and obstacle detection are formulated. Interpretation is the software building block where intelligence is embedded into the autonomous driving system. For example, when the perception stage has sensed an obstacle, the gathered information is given to the interpreter, which interprets it and activates the appropriate algorithm, in this case obstacle avoidance.

Steering Model

This block is the action phase of an autonomous vehicle: it is where the perception gathered by the sensors and the interpreted information are put into practice. The steering model determines the angle at which the vehicle steers left or right, or holds its position on the road. The basic driving force is the path-following algorithm, which triggers the steering at a specific angle to maintain position or move according to the defined settings. The steering itself is actuated by a servo motor that transmits the steering force via belt and pulley.

For example, imagine the sensing system sees a left-turn sign on the road. It passes this information to the interpretation block, which translates it into the language the steering model understands; accordingly, the steering is triggered at the required angle to perform the left turn. A minimal sketch of this step follows.
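Here is a minimal sketch of that mapping from an interpreted manoeuvre to a steering-angle command; the gain, angle limits, and fixed turn-in angle are illustrative assumptions.

```python
# Hedged sketch: interpreted manoeuvres become servo angle commands.
MAX_STEER_DEG = 35.0

def steering_command(manoeuvre, cross_track_error_m=0.0, gain=8.0):
    """Return a steering angle in degrees (positive = left)."""
    if manoeuvre == "turn_left":
        return MAX_STEER_DEG * 0.6          # fixed turn-in angle for the turn
    if manoeuvre == "keep_lane":
        # Proportional correction toward the lane centre, clamped to limits.
        angle = gain * cross_track_error_m
        return max(-MAX_STEER_DEG, min(MAX_STEER_DEG, angle))
    return 0.0

print(steering_command("turn_left"))        # 21.0 degrees to the left
print(steering_command("keep_lane", 0.3))   # gentle 2.4-degree correction
```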

Control System

The control system is considered the heart of the autonomous vehicle. While the other three blocks build the platform, the control system makes sure the vehicle actually gets driven on it. Because constant feedback must be processed to keep the vehicle in optimal condition, a closed-loop control system is installed. Its major components are the acceleration, deceleration, and emergency braking systems. To ensure safe and flawless driving, information such as deviation from the path, distance from the destination, obstacles, road type, and any turning requirement is constantly fed to the system, which then controls and operates the vehicle autonomously.

Conclusion

From a general perspective, autonomous vehicles may look like just an advanced version of today's cars. In reality, several complex processes not only make them better than today's cars but also open a whole new paradigm of mobility. With algorithms like path following and obstacle avoidance, and strict adherence to road signs, road accidents can be avoided: about 90 people die every day in the USA in road accidents, with human error as the main cause, and those lives could be saved by adopting an intelligent, smart mode of mobility, i.e., autonomous vehicles. With the highly sophisticated interplay of perception, interpretation, smart steering, and control, we can be confident that human error can be largely designed out, because unlike humans, machines do not yawn and do not tire.

References

  1. https://eldorado.tu-dortmund.de/handle/2003/38044
  2. https://www.wired.com/story/guide-self-driving-cars/
  3. https://www.sweeneymerrigan.com/car-accident-statistics-in-the-united-states/

Steering Model and Control – What is the Theory & What are the Challenges?

Autonomous vehicles become a hotter topic with every passing day. From safety to control mechanisms, everything is fiercely debated, and many articles and studies have been published both for and against them. Amid all this discussion, it is striking how many articles say autonomous vehicles are surely going to take on the world. According to predictions by IHS Automotive, there will be 21 million autonomous vehicles on the road by 2035. That figure speaks for the viability of autonomous vehicles, suggests that those opposing the technology will eventually settle down, and implies that future AVs will be intelligent and safe enough to attract buyers.

Regardless of all the advocacy and opposition, one thing that has emerged as a crucial point to clarify is the steering control system. It is among the most critical parts of an autonomous vehicle, and a sound explanation of the steering model and control makes its highly dynamic, intelligent mode of operation much easier for the general public to grasp. So: how do the steering model and control work, and beyond practical demonstrations, what do they look like in theory?

Theory of Steering Model and Control

Most of today's cars are equipped with Motor-Driven Power Steering (MDPS), which reduces the driver's effort by easing the required torque with an electric motor. But an autonomous vehicle has no driver whose effort needs easing, so this system alone is not recommended for it. An autonomous vehicle requires a system that not only actuates the steering but also compensates for error: in simpler words, a steering controller with a feedback mechanism.

Before getting into the detailed theory, let's first take a brief look at the hardware and the basic steering model. The steering model of an autonomous vehicle is equipped with an actuator that transmits torque to the steering via pulley and belt, plus a potentiometer to determine the position of the motor shaft. This assembly of actuators and sensors communicates over RS232 and a Controller Area Network (CAN). The CAN bus is used to read the motor's position and speed, and the same route is used to write the desired Pulse Width Modulation (PWM) signal that actuates the steering. A minimal sketch of such a CAN exchange follows.
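The sketch below shows what that read/write exchange could look like using the python-can package's virtual bus; the arbitration IDs and payload layout are pure assumptions, since a real steering ECU defines its own frame format.

```python
# Hedged sketch: commanding a PWM duty cycle and reading it back over CAN.
import can

# Two endpoints on the same virtual channel stand in for controller and ECU.
controller = can.interface.Bus(channel="steer", bustype="virtual")
actuator = can.interface.Bus(channel="steer", bustype="virtual")

def command_steering_pwm(bus, duty_cycle_percent: int):
    """Write the desired PWM duty cycle to the (assumed) actuator frame ID."""
    msg = can.Message(arbitration_id=0x101,        # assumed actuator ID
                      data=[duty_cycle_percent & 0xFF],
                      is_extended_id=False)
    bus.send(msg)

command_steering_pwm(controller, 42)
frame = actuator.recv(timeout=0.1)                 # ECU side receives the command
if frame is not None:
    print(f"id={hex(frame.arbitration_id)} duty={frame.data[0]}%")
```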

With the basic steering model established, let's discuss the control strategy in detail. The steering control system is essentially a path-tracking system that controls the vehicle's steering based on its current position and its deviation from the reference path.

The path tracker can be called the brain of the steering control: it locates, analyses, decides, and actuates the steering. It works with three basic modules: the velocity planning module, the look-ahead distance module, and the path tracking module. The velocity module plans the vehicle's velocity from the path's curvature, side-friction factor, and superelevation. The look-ahead distance module chooses the look-ahead distance based on the vehicle's velocity. Together these modules set the desired point relative to the reference path, and the path tracking module then generates the steering angle, which is actuated by the motors connected to the vehicle's steering model. A minimal sketch of these three modules follows.
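The sketch below implements the three modules with a pure-pursuit-style steering law; the gains, wheelbase, and friction handling are simplified assumptions, not the exact formulation behind the design described above.

```python
# Hedged sketch: velocity planning, look-ahead distance, and path tracking.
import math

WHEELBASE_M = 2.7   # assumed vehicle wheelbase

def plan_velocity(curvature_1pm, side_friction=0.15, superelevation=0.02,
                  g=9.81, v_max_mps=25.0):
    """Velocity module: cap speed so lateral demand stays within friction."""
    if abs(curvature_1pm) < 1e-6:
        return v_max_mps
    v = math.sqrt(g * (side_friction + superelevation) / abs(curvature_1pm))
    return min(v, v_max_mps)

def look_ahead_distance(v_mps, k=0.8, d_min=4.0):
    """Look-ahead module: look farther ahead at higher speed."""
    return max(d_min, k * v_mps)

def steering_angle(target_xy, v_mps):
    """Path-tracking module: steer toward a point on the reference path."""
    ld = look_ahead_distance(v_mps)
    alpha = math.atan2(target_xy[1], target_xy[0])   # heading error to target
    return math.degrees(math.atan2(2 * WHEELBASE_M * math.sin(alpha), ld))

v = plan_velocity(curvature_1pm=0.02)
print(f"planned speed {v:.1f} m/s, "
      f"steer {steering_angle((8.0, 1.2), v):.1f} deg")
```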

The whole control system is modelled and designed as a closed-loop feedback control system: a steering controller uses the feedback loop to make sure the actuator follows the reference point.

Challenges

Nothing in this world reaches ultimate perfection, and the same holds for the steering model and control of autonomous vehicles. Technology has reached the point where autonomous vehicles are no longer a dream, but some challenges remain.

The foremost challenge is convincing people of the safety of autonomous vehicles. For the steering model and control specifically, the fundamental issue is the absence of a contingency plan: what happens if such a sophisticated, advanced control system fails? This vulnerability is all the more concerning with no human behind the wheel, so the main challenge is the inclusion of redundancy.

The second big challenge is more of a design issue, but it is still significant enough to count as a challenge to the steering model and control. Since AVs require no human intervention, designers and researchers are unsure whether future autonomous vehicles should include a steering wheel at all. The question is also technically substantial: if steering wheels are removed, what design and operational considerations must be made for the steering model and control, and for the overall design of the AV?

Conclusion

The enrichment of artificial intelligence and machine learning algorithms is considered the foundation stone of autonomous vehicles. That is true from a technological perspective, but what if the algorithms are developed while the actuators and controllers meant to realise them are not yet mature? This concerns both manufacturers and enthusiasts. With a proper introduction to the theory of the steering model and control, it becomes much easier to advocate for autonomous vehicles and to convince people of their efficacy; likewise, a clear view of the challenges prepares manufacturers and designers to improve the steering model and control and maximise its safety. Above all, it is now well established that redundancy must be built into the steering control of autonomous vehicles.

References

  1. https://eldorado.tu-dortmund.de/handle/2003/38044
  2. https://www.automotive-iq.com/steering/articles/autonomous-driving-steering-concepts-self-driving-cars

How to Construct a Simulation Scene for Testing Autonomous Driving System in a Virtual Environment?

Developing an autonomous driving system is one thing; testing its efficacy is another. Researchers may well have made unrealistic or idealised assumptions while designing the system, and such designs can look very promising on paper yet fail against harsher ground realities. How, then, can such systems be checked and verified? The classic approach is a trial run, but given what is at stake with autonomous driving, real-world trial tests could prove fatal: if the algorithms are not efficient and grounded in reality, what if the vehicle hits a pedestrian, or violates a traffic signal and causes an accident? Such trial tests carry a great deal of risk.

Continue reading

What are the Requirements of the Autonomous Driving System?

From the invention of the first steam engine to the era of self-driving cars, the automobile industry has undergone unprecedented transformation and progress. The focus of manufacturers is now on developing autonomous driving systems rather than merely improving the designs of existing models. It is thanks to such rigorous research and development that the autonomous driving system has travelled from "maybe possible" to "definitely possible". Estimates say driverless technology will contribute an additional $7 billion to the global economy, and if companies like Tesla, Google, Nissan, General Motors, and Ford are pouring billions into R&D, there is undoubtedly great potential in this sector.

Continue reading