Thursday, February 21, 2019

Detection of Sign Board in Traffic for Vision-Based Driver’s Assistance


Vision-based driver assistance systems have received considerable attention over the past two decades, since many car accidents result from driver fatigue or lack of awareness. Detecting traffic signs on the road is very useful in improving driving and thereby helping to reduce accidents. Thus, the major purpose of vision-based traffic sign detection systems is to detect the yellow and red signs ahead of the vehicle using computer vision technologies and then stop or slow down where necessary. In intelligent vehicles, the driver assistance system is designed to help drivers perceive dangerous situations early enough to avoid accidents, because the system senses the danger and warns the driver by reading the traffic signs.


In this paper, the researcher aims to develop an algorithm that can detect traffic signs based on color and shape. The algorithm uses images taken by a camera mounted at the front of the moving car. Two types of traffic signs, the yellow warning signs and the red stop signs, will be tested and the results summarized. The study concludes that color-based detection is easily affected by illumination, whereas shape-based detection depends on the complexity of the background. It is anticipated that this research will help address the problem of road accidents while making the best use of technology to improve the transport sector. The study will be useful not only to drivers but also to other stakeholders such as travelers and vehicle owners.



CHAPTER 1

INTRODUCTION

1.1. Introduction

In the past three decades or so, autonomous vehicles have been a subject of intense research. State-of-the-art work leverages complex computer vision techniques for traffic sign detection, which has been an active area attracting the attention of several researchers. Many studies have used a front-facing camera for vehicle localization and navigation, obstacle avoidance, and environment mapping. On-road applications of these vision detection systems have included lane detection, occupant pose inference, and driver distraction detection. Some researchers have argued that when designing vehicle assistance systems it is imperative to consider not only the vehicle's surroundings and the external environment but also the internal environment and the driver (Trivedi, Gandhi, & McCall, 2007; Tran & Trivedi, 2012).
Taking other types of information into account while designing the sign detector can make the whole system even better (Morris & Trivedi, 2010). Every traffic detection system aims at four main goals. The first is that the algorithm should be accurate. Morris and Trivedi note that accuracy is the most basic requirement and the most important evaluation metric for the system under study. When the system is treated as a distributed system with the driver as an integral part, it enables drivers to contribute what they are good at while the traffic sign recognition component presents information based on the signs detected. Furthermore, the other surrounding sensors can also influence what is presented. That coordination helps ensure that driving takes place in the most careful manner, thereby improving road safety.
The ability to remain accurate in varying environmental conditions leads to the second goal of a traffic detection system: robustness. Because it is difficult to predict precisely which conditions the car will encounter, the system should achieve desirable results irrespective of the environment in which the vehicle operates. That means the algorithm should maintain integrity and consistency under nominal conditions. The third goal is speed: a driver assistance traffic detection system has to be fast. For the system to work as desired and fulfill its purpose, it must run in real time. Traffic sign detection is only one small task within the whole autonomous system network, and autonomous vehicles move at high speed, typically requiring a reaction to a traffic sign within a few seconds at most.
Lastly, the fourth evaluation metric of a vision-based driver assistance algorithm is cost. Various sensors are required, including GPS, radar, lidar, and inertial sensors, to achieve the desired system for the car in question. All of these require considerable spending, but designers should ensure that expenses do not balloon. The industry should also aim for a relatively high detection rate using low-cost cameras, even though the camera is not the most expensive item in our case. It is prudent to consider cost when purchasing the various subsystems and sensors, since for any investment the goal is to minimize costs and maximize benefits. This metric will therefore be important in evaluating the success of the system under development.


1.2. Statement of Purpose

A vision-based driver assistance system can detect road signs based on their color and shape. These two attributes, shape and color, are crucial in implementing a driver assistance system that can be effective and reduce road accidents, even in an autonomous vehicle. Many vision-based driver assistance systems help detect road signs, and hence the purpose of this paper is to design and implement a vision-based driver assistance system that can detect signboards. For instance, at a junction, it will be easy for the car to determine the correct direction to take to reach the desired destination.

1.3. Statement and Importance of Problem   

In recent years, vision-based detection systems for assisting drivers have been incorporated into top-of-the-line models by various manufacturers. However, a more effective sign detection solution has yet to materialize. While many such systems detect the normal road cues such as traffic lights, markings on the road, and the distance to other vehicles so that accidents can be avoided, they have not effectively addressed signboards. A more general vision-based sign detection system for driver assistance ought to incorporate every aspect of the road, so that the system not only helps to avoid road accidents but also helps in making the right judgment about the direction to take, particularly where there are many diversions.
The computational power and flexibility of vision-based driver assistance systems have recently increased thanks to system-on-chip technologies and the advanced computing power of new multimedia devices. It is thus crucial to take advantage of these technological advances to design powerful systems that make life easier than before. The introduction of driver assistance technologies onto the roads has been slow due to the lack of wide coverage of all the areas that should be covered. This research will thus add a fundamental ingredient to the solution of vision-based driver assistance systems. Introducing signboard detection into this big picture will mark a crucial success for such systems. When a driver approaches a junction with many possible directions, a signboard detection system will be a very important facet in determining the right direction to take.

1.4. Research Objective

The core purpose of this dissertation is to develop an algorithm that detects signboards on the road and to embed it in a vision-based driver assistance system. The system will also ensure that driver-assisted vehicles receive a real-time alert for any signboard detected on the road and can then decide which road or direction to take to arrive at the desired destination.

1.5. Experimental Approach

In this paper, an architecture for the vision-based driver assistance system is designed based on image processing technology. A camera mounted on the car's windshield determines the road's layout and the vehicle's position in the lane, and then detects the signboard to determine which lane the vehicle should take. The resulting image sequence is analyzed and processed by the system, which then automatically determines and reports the possible directions for the vehicle to take.

1.6. Significance of the Study

One of the unique contributions of this study is that it offers an affordable solution for a vision-based driver assistance system using available image processing technology. This approach will especially benefit designers of driver assistance systems and ongoing autonomous vehicle projects. The new system also provides real-time information based on the processed signboard images; it is cost-effective, accurate, and consistent compared with other, more expensive Matlab algorithms. The system is a substantial improvement to vision-based driver assistance because it brings novel ideas, especially in reading the information on signboards instead of just reading traffic signs as many systems do. It forms a vital foundation on which future systems can be built, so that the entire road environment is covered and the driver receives all the information necessary to make the right decisions and avoid accidents.

1.7. Limitations

a. The new system concentrates only on the detection of signboards, not other road cues such as the vehicles in front and behind. It also does not include signs such as warning lights or an approaching pedestrian.
b. The system can only detect signboards that are unobscured and at a close distance, not ones that are raised too high.
c. If the signboard is too close, the system may not help the vehicle make a quick decision on where to turn, especially if the vehicle is moving at high speed.
d. The system will need to be enhanced before it can be used in autonomous vehicles.
e. The system may also have difficulty perceiving signboards or other objects at night and in extreme weather conditions.
f. The system cannot warn the driver of other dangers on the road besides the information on the distance and the signboards ahead.


CHAPTER II

 REVIEW OF LITERATURE

Previous studies on vision-based driver assistance systems have attempted to identify other vehicles, traffic signs, obstacles, and pedestrians in on-road traffic scenes by capturing image sequences and applying image processing and pattern recognition techniques. Adopting various concepts and definitions of the objects of interest on the road, these techniques for capturing image sequences are useful in detecting obstacles and other signs. Many previous studies have focused on detecting objects and searching for specific patterns in the captured images, applying many techniques for the detection, classification, and interpretation of the image data. Different studies use different techniques at each of these stages of image processing, and these studies are analyzed in this chapter of the dissertation.
Vision-based driver assistance systems can detect road signs based on their shape and color. That is why the literature contains mainly two approaches to traffic sign recognition: segmentation using color information, or analysis of the edges of grey-scale images obtained from the camera. Many past sign recognition systems have focused on these two approaches, targeting the normal traffic signs such as those for a corner, bumps, a zebra crossing, or a steep cliff, among other road signs beside the road. Viola and Jones (2001) developed a machine-learning-based visual detector that uses an attentional cascade consisting of boosted Haar-like classifiers.
Viola and Jones's visual detection framework can process images extremely quickly while maintaining a high detection rate. The first key contribution of their system is an image representation known as the "Integral Image," which allows the features used by their detector to be computed very quickly. The second key contribution is a learning algorithm based on AdaBoost that selects a few crucial visual features and yields extremely efficient classifiers. The third is a technique for combining the classifiers in a "cascade" that allows background regions of the image to be rapidly discarded, so that more computation can be spent on promising object-like regions. The system provides detection performance comparable to the best vision-based sign detection driver assistance systems. However, it concentrates on detecting signs that are shaped objects and do not contain letters or numbers.
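The integral-image representation is simple enough to sketch. Below is an illustrative NumPy version (not the authors' code): two cumulative sums build the table, after which any rectangular pixel sum, the building block of Haar-like features, costs only four array lookups.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the left of
    (y, x), inclusive: one cumulative sum over rows, one over columns."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Any Haar-like feature value is then just a signed combination of a few such box sums, which is what makes the cascade so fast.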
Another work on vision-based sign detection for driver assistance is that of Huang and his colleagues (2002), who combined a Gaussian filter, a peak-finding procedure, and a line-segment grouping procedure capable of detecting lane marks. Vehicle detection is then achieved by leveraging underside features, symmetry properties, and vertical edges. Sun, Bebis, and Miller (2002) also proposed a car feature extraction and classification technique that uses a support vector machine in combination with a Gabor filter. These authors used a genetic algorithm to optimize the filter banks, while clustering was used to find filters with similar parameters and delete redundant ones. The concern with these two systems is that the data set collection and the iterative training of the classifiers are complex, even if performed in advance.
Another work that uses edge analysis on grey-scale images is that of Gavrila (1999), who employs a template-based correlation technique to detect potential traffic signs embedded in images. The technique uses the 'distance transform' method, whereby one starts with an edge image and then matches it against templates of the searched signs. It organizes the templates hierarchically to reduce the number of required operations. The problem with this method is the high computational cost of producing a real-time system. Another remarkable work is that of Barnes and Zelinsky (2004), who used a variation of the Hough transform. Their system is based on an earlier one by Loy and Zelinsky (2003), a quick method for detecting points of interest using radial symmetry. The information used is the magnitude and phase of the gradient read from the grey-scale edge image.
Although Loy and Zelinsky's method could originally detect only circular signs, it was improved in Loy and Barnes (2004) to detect rectangular, square, and even octagonal signs. A self-organizing map can be leveraged to extract contours and recognize the shapes of different traffic signs. Histograms of oriented gradients have also been used to filter road signs from other signposts on the roadsides. This is a useful system that forms the basis for this research, because a system is required that can set aside objects outside the targeted regions of interest. The researcher will also draw on this system's contour extraction to help extract the contours of the detected objects. The limitation of the system above is that it may not work effectively in extreme weather conditions or at night.
The resulting image of the detected sign has to be analyzed using a classifier to determine whether the detected candidate regions are real traffic signs. This stage is known as classification. The most commonly used tools for it are neural networks in their various topologies (Gavrila, 1999; de la Escalera et al., 2004; Broggi et al., 2007). A normalized image of the potential traffic sign is used as the input vector. Although neural networks are the key tool at this stage, they are not the only option. Template-matching methods can also be used, computing a normalized cross-correlation between the possible traffic signs and templates stored in a database. In García-Garrido et al. (2012), a new approach that uses a support vector machine with a Gaussian kernel is leveraged to accomplish the classification stage.
Buciu, Gacsadi, and Grava (2010) also propose a system capable of monitoring the state of the driver. The researchers note that drowsy driving is a considerable problem that can result in thousands of automotive accidents each year, citing statistics from France, where drowsiness is implicated in roughly 30 percent of car crashes each year and in about one-third of the fatal crashes on French highways. Driver state monitoring began about 37 years ago and remains a very active area. Such systems should be able to detect drowsiness through the vehicle's behavior, the driver's physical behavior, or the driver's physiological behavior. The research in this paper can draw on these ideas in the future to incorporate machine learning into the vision-based driver assistance system and build a more robust and intelligent system upon this useful work. The open problem is how to detect the driver's physiological behavior, but such a system is feasible.


CHAPTER III

 METHODOLOGY

3.1 Introduction

The development of a vision-based driver assistance system is imperative given current road conditions. This study aims to develop a system that can detect signboards on the road and alert the driver promptly so that he or she can choose the correct direction to reach the intended destination. Such a system aids the driver in making the correct turns based on the desired destination, which means the system should be supplied with predetermined data on the vehicle's route and destination. This project follows a series of steps aimed at ensuring that a complete system is designed as anticipated. Road signposts are often confusing, and new road users find it difficult to make the correct turns, especially where there are more than two possible routes from a given point.
The proposed system integrates effective vision-based processing modules such as signboard detection, event recording functionality, and collision warning features, the latter to avoid colliding with another vehicle when negotiating a corner or changing lanes to enter the required one. These functions are implemented to identify the target signboards, detect the vehicles in front of the car, estimate their distances, and record traffic event videos. The following subsections describe these processing modules in detail.

3.1.1 Segmentation.

The essence of the segmentation step is to form a rough idea of the signs that might be on the signboards, thereby narrowing the search space for the subsequent steps. A unique approach has been proposed: leveraging a biologically based attention system, it produces a heat map showing the areas on a signboard or signpost where the sign is likely to be found. The input images have to be captured from the camera, also referred to as the vision system. The sensed frames are the road environments that appear on the camera attached to the front of the host car. The task of the object segmentation stage is to extract an object from the road environment to facilitate rule-based analysis of the object. To reduce the computational cost of extracting objects, this module first extracts a grayscale image to determine whether a region is just an object or a signboard.
To extract these images from a given gray-intensity image, object pixels have to be separated from other object pixels with different illuminations. Since most signboards above the road are green, it is easiest first to separate the images in the vision system by color. An effective thresholding technique is therefore needed that can automatically determine a suitable number of thresholds for segmenting the object regions from the detected image. An effective multilevel thresholding technique is proposed in this paper to ensure fast region segmentation. The technique automatically decomposes captured road-scene images to produce homogeneous threshold images using a discriminant analysis concept.
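In its single-level form, discriminant-analysis thresholding is Otsu's method. The following pure-NumPy sketch (an illustration, not the dissertation's actual multilevel implementation) picks the gray level that maximizes the between-class variance and uses it to binarize a frame:

```python
import numpy as np

def otsu_threshold(gray):
    """Single-level Otsu threshold: choose the gray level that maximizes
    the between-class variance (a discriminant-analysis criterion)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability mass
    mu = np.cumsum(prob * np.arange(256))      # class-0 cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0       # degenerate one-class splits
    return int(np.argmax(sigma_b))

def segment(gray):
    """Binarize: 1 marks candidate object pixels brighter than the threshold."""
    return (gray > otsu_threshold(gray)).astype(np.uint8)
```

A multilevel version would apply the same criterion recursively or over multiple thresholds; this sketch shows only the core discriminant step.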

3.1.2 Spatial analysis and clustering.

To identify the potential signboard components after object segmentation, a component extraction process is performed on the object plane to locate the connected components of the objects. The process tries to identify rectangular green objects above the virtual horizon on the y-axis. This virtual horizon lies roughly 150 centimeters above the road surface, since many signboards are placed at least this high. A clustering process is then applied to the components to group them into meaningful clusters, which may contain traffic lights, road signs, and other illuminated objects raised above the ground. The image identification process then examines these groups to identify the actual signboards.
To preliminarily screen out non-signboard objects such as street lamps or traffic lights, objects that appear below one-third of the virtual y-axis horizon are filtered out (that is, only the components located above the constraint line are kept). This is because the study assumes that signboards are mounted well above the road surface.
Also, to determine the direction in which the sign on the signboard points, it is vital to identify the components at the head side and the tail side before carrying out the subsequent analyses. The most useful distinguishing feature of the tail and head sides of a road sign is the arrow, which represents the forward direction. When an object is close to the camera-assisted car, blooming effects in CCD cameras may prevent the camera from focusing on the signboards above. The vehicle should therefore keep the required distance from any obstruction, especially a tall vehicle carrying goods likely to obscure the camera. After identification, the objects are merged and clustered into rectangular green component groups if they carry arrow signs, are close to one another, and are aligned.
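The screening described above, extracting connected components and keeping only those wholly above the virtual horizon line, can be sketched as follows. This is an illustrative pure-Python labelling pass, not the system's implementation, and the function names are placeholders:

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labelling on a binary mask via BFS.
    Returns a label image and the number of components found."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

def signboard_candidates(mask, horizon_row):
    """Keep components lying entirely above the constraint line, mirroring
    the rule that overhead signboards sit high in the frame.
    Returns bounding boxes (top, left, bottom, right)."""
    labels, n = connected_components(mask)
    keep = []
    for lbl in range(1, n + 1):
        ys, xs = np.nonzero(labels == lbl)
        if ys.max() < horizon_row:
            keep.append((int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())))
    return keep
```

The retained bounding boxes would then feed the arrow head/tail analysis and the later clustering of aligned green components.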

3.1.3 Sign tracking and identification phase.

These techniques obtain the image groups of potential signboards in each captured frame. However, because adequate features of some potential signboards may not be captured from single image frames, a tracking procedure is needed that can analyze the information about possible signboards across successive frames. The tracking information is then used to refine the detection results and suppress errors introduced during the object segmentation and spatial clustering processes. It can also help determine the direction of the signpost, the lane to take, and other useful information.
To distinguish real signboards in each frame, the proposed system applies a rule-based process to each tracked candidate to determine whether it carries actual signs or is merely another illuminated object. The detected signs are then associated with the relevant information to show the destination points of each lane.

3.2. Signboard Distance Estimation Module

To estimate the distance between the host car's camera and the detected signboard, the proposed system applies the single-camera range estimation perspective for CCD cameras introduced by Stein, Mano, and Shashua (2003). The origin of the virtual signboard coordinate frame lies at the center of the camera lens. The X and Y coordinate axes of this frame are parallel to the corresponding axes of the captured images, and the Z-axis runs along the optical axis, perpendicular to the plane formed by the horizontal and vertical (X and Y) axes. A target signboard on the road at distance Z in front of the car projects onto the camera's image at the y-coordinate. The single-camera range estimation perspective can therefore be used to estimate the Z-distance between the camera-assisted car and the signboard using the equation below.
Z = k · (f · H) / y
where k is the factor for converting the distance from pixels to millimeters for the camera mounted at height H, f is the focal length, and y is the signboard's vertical image coordinate.
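A direct reading of the equation gives the following sketch; the parameter names are illustrative, and the unit handling via k is an assumption rather than the system's actual calibration:

```python
def signboard_distance(y, f, H, k=1.0):
    """Range estimate Z = k * (f * H) / y.

    y: signboard's vertical image coordinate in pixels (from the
       principal point); f: focal length; H: camera mounting height;
    k: pixel-to-metric conversion factor for the mounted camera.
    """
    if y <= 0:
        raise ValueError("y must be positive")
    return k * (f * H) / y
```

Note the inverse relationship: as the car approaches the signboard, its image moves away from the principal point (y grows) and the estimated range Z shrinks.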


3.3. Research Design

The design used in this research is experimental. A CCD camera is selected, and a mounting point is determined. The point of attachment of the CCD camera on the host car should allow it to view the signboards, which are usually raised some distance above the road surface. Focusing on that region helps the system view only these signboards, which also usually have a specific color depending on the country; the most common color is green. The computer system is also used to distinguish between actual signboards and other boards or objects that may be placed along the roads, such as advertisements and caution messages.
After the camera detects an object above the road surface, the object is filtered using a rule-based approach to determine whether it is an actual signboard or some other object. An arrow is then searched for on the object, because a signboard has to contain an arrow showing which lane leads where so that the car can turn in the desired direction. The information is given in real time to allow the correct decisions to be made at the right time. The system should therefore also calculate how far the car is from the signboard and at what point it should change lanes if necessary.

3.4. Data Collection, Subject Selection, and Description

The experimental car is provided by NY Toyota car dealers, while the CCD cameras are acquired from Princeton Instruments. JPEG files are used to display the images in a 2-dimensional view, while STL files present the images in 3 dimensions. The features of the signboard are extracted and recognized by the image processing system, after which tracking is done based on optical flow to reduce the computational complexity.

3.5. Procedure

i. First, a frame is captured by the camera, which then focuses on any object raised above the road surface.
ii. The region of interest is then extracted from the captured images. This region of interest is the arrow usually found on the signboard showing where the different lanes lead.
iii. The computing system then reads the identified point at the center of the image that has been determined to be a real signboard. The head and tail parts of the arrow are examined and interpreted to determine the direction of each lane.
iv. The distance of the signboard from the car is then determined to find out at what point the car should turn or change lanes to reach the targeted destination.
v. A tracking process then follows to confirm that the capturing and processing have been done correctly, after which the computer system helps the driver make the final decision.
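The five steps above can be sketched as one capture-to-decision loop. The stage callables here are placeholders standing in for the modules of Sections 3.1 and 3.2, not the actual implementation:

```python
def process_frame(frame, detect, extract_arrow, classify, estimate_distance, track):
    """One iteration of the capture-to-decision loop (steps i-v).
    Each stage is passed in as a callable so the sketch stays independent
    of any particular detector implementation."""
    candidates = detect(frame)                 # steps i-ii: raised objects / ROIs
    decisions = []
    for region in candidates:
        arrow = extract_arrow(region)          # step ii: arrow region of interest
        if arrow is None:
            continue                           # no arrow: not a signboard
        direction = classify(arrow)            # step iii: head/tail reading
        distance = estimate_distance(region)   # step iv: range to the signboard
        if track(region):                      # step v: confirm across frames
            decisions.append((direction, distance))
    return decisions
```

In the real system each callable would be the corresponding module (segmentation, spatial clustering, tracking, range estimation); chaining them this way makes the per-frame data flow explicit.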
To effectively assess the obtained results, a statistical hypothesis test was set up to find out whether the vision-based driver assistance system is more effective than the manual approach of watching for the signboards and then making the correct decision. As part of this analysis, the following hypotheses were developed.
H0: The vision-based driver assistance system for signpost recognition is no more effective than the manual method at detecting signboards on the road and determining the lane to take at a junction with multiple lanes.
H1: The vision-based driver assistance system is more effective than the manual method at detecting signboards on the road and making the correct decisions.

3.6. Limitations

The main limitation of this study is that the cameras may not work well at night if the signboards are unlit. It may also be hard for the camera to perceive the signboards if it is mounted on a small car that is too close to a tall vehicle whose load obscures the view. Another limitation is that the camera may not work well in heavy rain. Finally, when the car is moving fast and the driver does not know the road well, he or she may make a wrong turn or lane change, especially if the information from the system is read too late or the results are not displayed in real time.


CHAPTER IV

 RESULTS AND DISCUSSION

4.1. Results

The experiments were performed on a highway during daytime with sufficient light. The results presented here were obtained from image sequences captured by the camera at different points along an 80-kilometer road. The performance of the entire system was computed over a test set of 60 stereo pairs of images with a resolution of 320 × 240 corresponding to that road. In evaluating the results, detected signboards were treated as positive samples, denoted P, whereas the negative samples, denoted N, were noisy detected objects that were not signboards. Every detection can therefore be classified as a true positive (TP), a true negative (TN), a false positive (FP), or a false negative (FN). A TP occurs when both the prediction and the real value are positive, while a TN occurs when both are negative. When the prediction is negative but the actual value is positive, the outcome is a false negative; when the prediction is positive but the actual value is negative, it is a false positive.
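From these counts, the sensitivity and precision figures quoted later in this chapter follow directly. A minimal sketch, assuming the standard definitions since the text does not spell out the formulas:

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall) and precision from the TP/FP/FN counts.

    sensitivity = TP / (TP + FN): fraction of real signboards detected.
    precision   = TP / (TP + FP): fraction of detections that are real.
    """
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, precision
```

For example, 95 true positives with 5 false positives and 5 false negatives gives sensitivity and precision of 0.95 each, matching the "exceeded 95 percent" scale of the reported results.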
To make the proposed system operate as anticipated, the CCD camera receiving the input sequences was mounted inside the car immediately behind the windshield, at an adequate height to capture the regions encompassing the signboards of interest, as shown in figure 1 below. The camera's view angle was then calibrated to be parallel to the road surface with an elevation angle of 60 degrees, and its focal length was set to 20 mm. Peripheral devices such as the image-grabbing devices, in-car control devices, and mobile communication systems were also included in the embedded platform to complete the in-vehicle vision-based driver assistance and surveillance system. The system was then tested on many videos of real road scenes under different conditions.

The user interface of the system for detecting signboards on the road consists of three buttons for system configuration, starting, and stopping. The configuration button sets system values such as the voice volume, traffic-scene video recording, and control signaling, while the starting and stopping buttons start and stop the system respectively.

A tracked signboard was considered classified when the output from the classifier exceeded a given threshold more than five times. It is difficult to classify signboards at night, particularly circular ones, which have less variability. That was also an advantage, because circular objects had to be eliminated anyway: they do not match the shape of a signboard, which is always rectangular. Accordingly, the sensitivity for circular objects at night reached no more than 80 percent, whereas the sensitivity and precision for the other objects exceeded 95 percent under all lighting conditions.
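The acceptance rule above — classify a tracked object only once the classifier output has exceeded the threshold more than five times — can be sketched as a simple per-track counter. The threshold value and the per-frame scores below are hypothetical:

```python
class TrackedSign:
    """Accepts a tracked object as a classified signboard only after the
    classifier score has exceeded the threshold more than `required_hits` times."""
    def __init__(self, threshold=0.5, required_hits=5):
        self.threshold = threshold
        self.required_hits = required_hits
        self.hits = 0

    def update(self, score):
        if score > self.threshold:
            self.hits += 1
        return self.hits > self.required_hits  # True once exceeded >5 times

track = TrackedSign()
scores = [0.6, 0.7, 0.4, 0.8, 0.9, 0.65, 0.72]  # hypothetical per-frame scores
results = [track.update(s) for s in scores]
print(results)  # → [False, False, False, False, False, False, True]
```

The counter only flips on the sixth score above the threshold, which suppresses spurious single-frame classifications.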
One objective of this experiment was to obtain a system that can work in real time. To that end, the average run-time was measured, as shown in figure 1. The measured pipeline consists of three processes: the extraction, encoding, and selection of contours; the Hough transform (HT); and tracking and classification with the SVM.
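The per-frame run-time measurement across the three stages can be sketched as follows. The stage functions here are placeholders standing in for the actual contour, Hough transform (HT), and SVM steps, which are not reproduced in this document:

```python
import time
import statistics

def contour_stage(frame):   # placeholder: contour extraction, encoding, selection
    time.sleep(0.001)

def hough_stage(frame):     # placeholder: Hough transform (HT)
    time.sleep(0.001)

def svm_stage(frame):       # placeholder: SVM tracking and classification
    time.sleep(0.001)

def time_pipeline(frames):
    """Run all three stages per frame and return mean/stdev run-time in ms."""
    runtimes = []
    for frame in frames:
        start = time.perf_counter()
        contour_stage(frame)
        hough_stage(frame)
        svm_stage(frame)
        runtimes.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(runtimes), statistics.stdev(runtimes)

mean_ms, std_ms = time_pipeline(range(10))
print(f"mean {mean_ms:.1f} ms, std {std_ms:.1f} ms")
```

Reporting both the mean and the deviation, as the experiment does (35 ms ± 19 ms), matters because a low mean with high variance can still miss real-time deadlines on individual frames.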


Even though the results were obtained through an offline process, the measured run-time of 35 ms with a deviation of 19 ms allows for real-time performance of the system. In the future, a real-time implementation will be considered to confirm that the system works effectively in real time.
Regarding computational time, the time required to compute one input frame is determined by the complexity of the signboard objects being captured. Most of the computing time is spent in the clustering of potential signboard objects. The vision-computing phases of the proposed driver assistant system require 30 ms on average to process a frame of 320 × 240 pixels, while the traffic-scene video recording takes approximately 5 ms per frame with hardware acceleration. This computational cost ensures that the proposed driver assistant system satisfies the real-time processing demand, which in this case is set at ten frames per second. The system therefore offers timely assistance and warnings so that drivers can make the correct decision at a point where several possible lanes can be taken.
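The real-time claim can be checked with simple arithmetic: at ten frames per second the budget is 100 ms per frame, against roughly 30 ms of vision computing plus 5 ms of video recording reported above.

```python
TARGET_FPS = 10
FRAME_BUDGET_MS = 1000 / TARGET_FPS   # 100 ms available per frame at 10 fps

vision_ms = 30     # average vision computing per 320x240 frame
recording_ms = 5   # traffic-scene recording with hardware acceleration

total_ms = vision_ms + recording_ms
print(f"per-frame cost: {total_ms} ms of {FRAME_BUDGET_MS:.0f} ms budget")
print("meets real-time demand:", total_ms <= FRAME_BUDGET_MS)  # True
```

The 65 ms of headroom also covers the variance in pipeline run-time noted earlier.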

4.2. Discussion

Many of the vision-based driver assistant systems proposed or developed to date only detect lanes, signs, or other objects on the road such as cars, and such systems are aimed solely at reducing traffic accidents. No system has been developed to assist drivers in identifying the correct lane to take at a junction where there is more than one possibility and there are signboards above the highway. Such situations are usually problematic for new drivers, tired drivers, or those driving at very high speeds, who may fail to make the correct decision and end up in the wrong lane. That is the gap this system addresses.
From the results, we can determine which hypothesis to accept, H0 or H1; the effectiveness of the system makes it clear that the null hypothesis should be taken. The system developed in this paper shows that a vision-based driver assistant system produces timely results that can help the types of drivers identified above make the correct decision on a highway with more than one exit lane. The CCD camera proved very effective in the proposed system because it produces high-quality images, which make it easy for the computing system to determine whether an object is a real signboard. These high-resolution images also make it easy to identify the arrow signs on the captured objects and thereby confirm whether they are actual signboards.
The images captured by the CCD camera were JPEG images, with frames measuring 320 by 240 pixels. For a small screen like the one used by the system, this is a good resolution. Other researchers have preferred to capture and present VGA images, but the choice depends on the display system being used. It is also worth noting that both the system and the images require portability: the JPEG images captured in this experiment are portable because almost any type of screen, including LED and LCD screens, can display them. JPEG images are also not computationally intensive, which makes the system cost-effective in that respect and is why the images were processed very fast by the system.
The camera was able to capture objects raised some distance above the road surface. In most cases it could perceive the signboards because there were no obstructions such as large vehicles carrying tall loads. The car had to reach the point where the viewing angle to the signboard was 60 degrees before capturing the image. Sometimes, when negotiating a corner, images were captured at the same angle; the system would only report those images after processing them and determining whether they were actual signboards. With this kind of intelligence, the system avoids distracting the driver when the camera captures objects that are not actual signboards.
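Under the simple geometry implied above, the distance before a signboard at which the camera's line of sight reaches a 60-degree elevation angle can be estimated as the height difference divided by tan(60°). This is a hypothetical simplification, and the mounting heights below are illustrative, not the experiment's values:

```python
import math

def capture_distance(sign_height_m, camera_height_m, angle_deg=60):
    """Horizontal distance at which the line of sight to the signboard
    makes the given elevation angle with the road surface."""
    rise = sign_height_m - camera_height_m
    return rise / math.tan(math.radians(angle_deg))

# Illustrative values: sign mounted 5 m above the road, camera at 1.2 m
d = capture_distance(5.0, 1.2)
print(f"capture point is about {d:.2f} m before the signboard")
```

A steeper angle thus means the image is captured closer to the signboard, where it appears larger in the frame.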
The system's user interface also made it easy to operate the system and to use it only when needed. The interface consists of the system starting, stopping, and configuration buttons. The start button lets the driver begin using the intelligent system, while the stop button lets the driver turn it off when it is not needed. However, since the system was tested on a highway, it was not turned off during the experiment, as it was in continuous use. A driving speed of 60 km/h also allowed the system to capture and process the images effectively. At speeds exceeding 100 km/h the system may not be effective, although this was not tested in this experiment.
The experiment also showed that the camera can capture up to 12 images per second, which is one of the strengths of this system. Many previously developed systems capture at most seven images per second, so this is a notable improvement for vision-based driver assistant systems. On most highways it is rare to find more than five signboards at the same point, but advertisements or road caution boards are sometimes placed alongside the signboards, so it is vital for the system to capture as many images per second as possible. This also allows the system to be applied to any highway, including those with several other objects placed together with the signboards at the same point.
The system classifies a set of different objects as circular, triangular, or rectangular. This large set of shapes is a clear improvement over other works, and it means the system can be adapted to detect other road signs of various shapes, including diamond shapes. Testing the system in the real world also helped to ensure a high recognition rate, making it reliable enough to be used as-is, without modification; this distinguishes it from many similar systems that are merely simulations rather than real experiments. As the results show, the precision and sensitivity are above 95 percent, dropping only in extreme weather conditions and at night when the signboard is not adequately illuminated.
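One common way to realize the circular/triangular/rectangular classification is to count the vertices of each detected contour after polygon approximation. The sketch below is a hypothetical simplification of the system's actual shape stage, not its implementation:

```python
def classify_shape(num_vertices):
    """Classify a contour by its approximated polygon vertex count."""
    if num_vertices == 3:
        return "triangle"
    if num_vertices == 4:
        return "rectangle"   # also covers diamond shapes (rotated squares)
    if num_vertices > 8:
        return "circle"      # many short edges approximate a circle
    return "other"           # shapes matching no signboard class are discarded

print([classify_shape(n) for n in (3, 4, 12, 6)])
# → ['triangle', 'rectangle', 'circle', 'other']
```

Rejecting the "other" class at this stage is what lets the system filter out objects that do not correspond to any signboard shape.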
Additionally, using the I2V communication system presents a novel solution for discarding detected objects that are not actual signboards, instead of relying on geometrical constraints. In any research, developing novel ideas for solving the problem at hand is the main objective, and that has been achieved here. The system can also be extended into a multipurpose one by adding other functionalities such as identifying road signs, identifying pedestrians, identifying other vehicles in front of and behind the car, and producing accurate readings even in extreme weather conditions.
The vision-based technologies were integrated and implemented on an ARM-DSP multi-core platform that includes peripheral devices such as mobile communication and image-grabbing devices. These modules and other in-vehicle control systems were integrated so that an in-car embedded vision-based driver assistance and surveillance system could be realized. The experimental results show that the proposed system is effective and offers the benefits of integrated signboard detection and traffic event recording. All of these factors help the driver carry out the desired surveillance in different road environments and traffic conditions, and make it possible to apply the system to other traffic detection tasks with minimal reconfiguration.



CHAPTER V

 CONCLUSIONS AND RECOMMENDATIONS

5.1. Conclusions

This paper has presented a vision-based signboard recognition system that can assist drivers on a highway in making prudent decisions. Many highways have several lanes, and drivers may be required to keep shifting from one lane to another depending on their destination. With this system, the driver need not worry: it does almost everything needed, especially when the driver faces a dilemma over which lane to take and there are signboards above the highway. The system can perform very quick calculations and processing on the images received from the camera mounted inside the car just behind the windshield. One of its key strengths is that it detects and processes the images in real time; receiving results within a second means there is no distraction.
The experiment also showed that the encoding of contours helps to solve, in many instances, the issues of bifurcations, discontinuities, and changes of direction. This makes the proposed vision-based driver assistant system for detecting signboards reliable and accurate, with an average detection rate of 95 percent across all lighting conditions. It is also applicable to triangular, rectangular, and arrow signs and objects. The fact that the system can filter out other shapes that do not constitute a signboard adds to its reliability. Furthermore, signboard detection in this system is adaptive, thanks to the use of adaptive thresholds and the application of the Hough transform based on the information the system receives from every contour.


5.2. Recommendations

The system developed here should be compared with other existing approaches to establish a clear baseline for improving its results. At present, however, such a comparison is impossible because there are no common criteria or frameworks for evaluating traffic sign detection systems. I recommend that frameworks be established to serve as the basis for evaluating traffic sign and signboard detection systems; once such frameworks become available, this research will carry out the comparison to find areas for improvement. Another opportunity for improvement lies in sensor installation: sensors could be installed on all signboards to help vision-based systems detect them easily. This would reduce the need to install expensive systems in vehicles, requiring only inexpensive sensors and transmitters. That is what this research will focus on in the future.
Additionally, future research will focus on using vehicle dynamics such as the vehicle trajectory, yaw rate, speed changes, vehicle direction, and steering wheel position, among others. The purpose of using these vehicle dynamics is to improve the robustness of the process of discarding unwanted traffic signs or objects. Automatic traffic sign recognition for a driver assistant system can be very important, and it also enables other applications such as tracking the inventory of traffic signs and automatically inspecting them, providing a safer response and better-maintained signposting. Such an automatic system can also help in building and maintaining maps of road signboards and other traffic signs. All of these applications present challenging research work for the future.
Another improvement and extension that can be made to this vision-based driver assistance system in the future is the integration of more complex machine learning techniques, such as Support Vector Machine classifiers applied to many cues, including car lights and bodies. This would further enhance the feasibility of detecting signboards and other traffic signs at night and in difficult weather conditions, and would improve the classification of more comprehensive vehicle types such as sedans, lorries, buses, trucks, and motorbikes. Research in other areas such as lane detection could also be of great benefit here. Additionally, regarding the surroundings, connecting knowledge of the weather and lighting conditions at a given time could enhance the robustness of the system; such a system could then be even more useful at night than during the day.


REFERENCES

Barnes, N., & Zelinsky, A. (2004). Real-time radial symmetry for speed sign detection. In Proceedings of the IEEE Intelligent Vehicles Symposium (pp. 566–571). Parma, Italy, 14–17 June 2004.
Broggi, A., Cerri, P., Medici, P., Porta, P. P., & Ghisio, G. (2007, June). Real time road signs recognition. In Intelligent Vehicles Symposium, 2007 IEEE (pp. 981-986). IEEE.
Buciu, I., Gacsádi, A., & Grava, C. (2010). Vision-based approaches for driver assistance systems. Proc. ICAI'10, 92-97.
De La Escalera, A., Armingol, J. M., Pastor, J. M., & Rodríguez, F. J. (2004). Visual sign information extraction and identification by deformable models for intelligent vehicles. IEEE transactions on intelligent transportation systems, 5(2), 57-68.
García-Garrido, M. A., Ocana, M., Llorca, D. F., Arroyo, E., Pozuelo, J., & Gavilán, M. (2012). Complete vision-based traffic sign recognition supported by an I2V communication system. Sensors, 12(2), 1148-1169.
Gavrila, D. (1999). Traffic sign recognition revisited. Proceedings of the DAGM-Symposium; Bonn, Germany. 15–17 September 1999; pp. 86–93.
Huang, S. S., Chen, C. J., Hsiao, P. Y., & Fu, L. C. (2004, April). On-board vision system for lane recognition and front-vehicle detection to enhance driver's awareness. In Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on (Vol. 3, pp. 2456-2461). IEEE.
Loy, G., & Zelinsky, A. (2003). Fast radial symmetry for detecting points of interest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25, 959–973.
Loy, G., & Barnes, N. (2004, September). Fast shape-based road sign detection for a driver assistance system. In Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on (Vol. 1, pp. 70-75). IEEE.
Mogelmose, A., Trivedi, M. M., & Moeslund, T. B. (2012). Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey. IEEE Transactions on Intelligent Transportation Systems, 13(4), 1484-1497.
Morris, B., & Trivedi, M. (2010, June). Vehicle iconic surround observer: Visualization platform for intelligent driver support applications. In Intelligent Vehicles Symposium (IV), 2010 IEEE (pp. 168-173). IEEE.
Stein, G.P., Mano, O., & Shashua, A. (2003). Vision-based ACC with a single camera: Bounds on range and range rate accuracy. Proceedings of IEEE Intelligence Vehicle Symposium, 2003:120–125.
Sun, Z., Bebis, G., & Miller, R. (2002). On-road vehicle detection using Gabor filters and support vector machines. In Digital Signal Processing, 2002. DSP 2002. 2002 14th International Conference on (Vol. 2, pp. 1019-1022). IEEE.
Trivedi, M. M., Gandhi, T., & McCall, J. (2007). Looking-in and looking-out of a vehicle: Computer-vision-based enhanced vehicle safety. IEEE Transactions on Intelligent Transportation Systems, 8(1), 108-120.
Viola, P., & Jones, M. (2001). Robust real-time object detection. International Journal of Computer Vision, 57(2), 137–154.