Academic Journal
LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles
| Field | Value |
|---|---|
| Title | LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles |
| Authors | G Ajay Kumar, Jin Hee Lee, Jongrak Hwang, Jaehyeong Park, Sung Hoon Youn, Soon Kwon |
| Source | Symmetry; Volume 12; Issue 2; Pages: 324 |
| Publisher Information | Multidisciplinary Digital Publishing Institute |
| Publication Year | 2020 |
| Collection | MDPI Open Access Publishing |
| Subject Terms | computational geometry transformation; projection; sensor fusion; self-driving vehicle; sensor calibration; depth sensing; point cloud to image mapping; autonomous vehicle |
| Description | The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as detecting objects at short and long distances. Because the two sensors simultaneously capture different attributes of the environment, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometrical transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. The fusion information was then used to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated quantitatively and qualitatively in real-road and simulated environment scenarios. The proposed low-level sensor fusion, based on computational geometric transformation and projection, thus proves to be a promising solution for enabling reliable and consistent environment perception for autonomous vehicles. |
| Document Type | text |
| File Description | application/pdf |
| Language | English |
| Relation | https://dx.doi.org/10.3390/sym12020324 |
| DOI | 10.3390/sym12020324 |
| Availability | https://doi.org/10.3390/sym12020324 |
| Rights | https://creativecommons.org/licenses/by/4.0/ |
| Accession Number | edsbas.1E77F3C1 |
| Database | BASE |
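The projection pipeline the abstract describes — transforming LiDAR points into the camera frame, projecting them onto the image plane, and reading off depth for detected objects — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the calibration values `K`, `R`, `t` and the helper names are hypothetical placeholders, whereas the paper obtains its actual extrinsics from a 3D-marker-based calibration and its bounding boxes from RefineDet.

```python
import numpy as np

# Hypothetical calibration; the paper derives these via a 3D marker.
K = np.array([[700.0,   0.0, 320.0],   # camera intrinsic matrix
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # LiDAR-to-camera rotation
t = np.array([0.0, -0.08, -0.27])      # LiDAR-to-camera translation (m)

def project_points(points_lidar):
    """Map Nx3 LiDAR points to pixel coordinates and depths via [R|t], then K."""
    cam = points_lidar @ R.T + t       # rigid transform into the camera frame
    cam = cam[cam[:, 2] > 0]           # keep only points in front of the camera
    uv = cam @ K.T                     # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]        # perspective divide -> pixel coords
    return uv, cam[:, 2]               # (u, v) pixels and depth in metres

def object_distance(uv, depth, box):
    """Median depth of the projected points that fall inside a detector box."""
    x1, y1, x2, y2 = box
    mask = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
           (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return float(np.median(depth[mask])) if mask.any() else None
```

Taking the median depth inside the box is one simple way to suppress points that pierce the box but belong to the background; the paper's own aggregation rule may differ.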