Year 3: Progress Report 3 - Jake Garrison

ADAS Hardware Tool Check 3/24/16



Year 3: Progress Report 3

April 10th, 2014


Table of Contents

A - Activity 1: Vehicle Identification
A.1 - Program Description
A.2 - Image Processing Tweaks
A.3 - Detection
A.4 - Prediction
A.5 - Output Video Results

B - Activity 2: Sign Identification
B.1 - Program Description
B.2 - Object Detection
B.3 - Color Based Detection
B.4 - Competition Provided Video Results
B.5 - Team Provided Video Results

C - Activity 3: S32V Interface Demonstration
C.1 - Workspace
C.2 - Research
C.3 - Implementation
C.4 - Issues

List of Figures

Figure 1 Block diagram of activity 1 code
Figure 2 Vehicle successfully detected (green) and another (red) which is not of interest
Figure 3 Following distance per frame
Figure 4 Detection using the C based code
Figure 5 Block diagram of activity 2 code
Figure 6 Java image scraper to populate training database
Figure 7 Competition provided video output frame
Figure 8 False positives of color detector
Figure 9 Stop sign detection using only the cascade detector
Figure 10 Workspace includes a Linux workstation, NXP S32V board, and MacBook Pro
Figure 11 Block diagram of activity 3


A - Activity 1: Vehicle Identification

A.1 - Program Description

The vehicle identification code, car_detect.m, identifies nearby vehicles and tracks the vehicle directly in front. The code can be broken down into five main parts as identified in Figure 1 below. The IO stage simply processes frames in order and reads and writes to the file system. The stereo stage takes the stereo frames and generates a point cloud such that a depth (in meters) is assigned to each pixel. The stereo code is very similar to the provided Matlab example, and the input video is also from the Mathworks examples. Though it would have been ideal to use other video sources, there were initial issues working with other stereo videos. Despite the low FPS and jumps in the Mathworks video, it still provides a good demonstration of our code. Note that the stereoParams.mat file is used to calibrate the stereo cameras and thresholdPC.m applies threshold bounds to the point cloud. The tweaks section is composed of image processing techniques aimed at increasing the likelihood of a correct vehicle detection. In the detect blocks, a trained cascade object detector is used to identify possible vehicles by returning the coordinates for one or many bounding boxes. The detection algorithm relies on a pre-trained detector. Although custom detectors were tested, the final submission uses the Mathworks-provided CarDetector.xml. The detection stage also attempts to score the bounding boxes so that if multiple vehicles are detected in a single frame, the one that is most likely to be the vehicle of interest is colored green (high score) and all others are red. Finally, as a fallback, if detection fails, the prediction code attempts to predict the location and distance of the vehicle in front using a weighted sum and simple parameter thresholds. The prediction code is represented with a cyan box annotation and is only executed if the detector fails (30% of the frames).

FIGURE 1 BLOCK DIAGRAM OF ACTIVITY 1 CODE


A.2 - Image Processing Tweaks

In order to improve detection of vehicles located directly ahead of the camera, a region of interest (ROI) is implemented which selects a portion of each frame to analyze, instead of the entire frame. This both increases processing speed and improves detection accuracy. The user can toggle an option to specify an ROI or use a default ROI. If the user toggles the option to specify an ROI, then the first frame of the video appears and a region is defined using a resizable rectangle. Double-clicking confirms the selection, and further processing uses the ROI specified by the user. An additional enhancement to improve detection is increasing the brightness of the ROI. The input video has many shadows and dark cars that are not identified by the detector. After converting to the YCbCr color space, the brightness is increased, and the frame is then converted back to the RGB color space for further processing. Adjusting the brightness allowed for a 15% increase in the vehicle detection stage. In production code, the ROI and brightness tweaks could be dynamic based on the environment and mounted cameras.
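The crop-and-brighten step can be sketched in a few lines. This is a language-agnostic illustration (shown in Python on a plain luma array rather than the team's Matlab/YCbCr pipeline), and the 30% gain and helper name are assumed placeholders, not values from the report:

```python
def adjust_roi_brightness(frame, roi, gain=1.3):
    """Crop a region of interest and brighten it.

    frame: 2-D list of luma (Y) values, 0-255.
    roi:   (row, col, height, width) rectangle.
    gain:  illustrative brightness multiplier (assumed, not from the report).
    """
    r, c, h, w = roi
    # Keep only the region of interest; later stages see just this crop.
    crop = [row[c:c + w] for row in frame[r:r + h]]
    # Scale brightness, clamping to the valid 8-bit range.
    return [[min(255, int(y * gain)) for y in row] for row in crop]

frame = [[100] * 4 for _ in range(4)]
print(adjust_roi_brightness(frame, (1, 1, 2, 2)))  # [[130, 130], [130, 130]]
```

Processing only the cropped region is what yields both the speedup and the accuracy gain described above, since the detector never sees distracting periphery.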

A.3 - Detection

The detection stage is the heart of the processing. It relies on the cascade object detector and its ability to recognize objects after being trained with positive and negative examples of the object of interest. Mathworks provided a detector that seems to work well, but it is somewhat of a black box, since it is unknown how many images were used to train it. Additional custom detectors were made in an attempt to outperform the Mathworks one, but no significant improvements were observed. When the detector is executed on a frame, it returns the coordinates of boxes that bound each vehicle that has been identified. Each box is then analyzed and given a score that represents the likelihood of being the vehicle of interest. This score is based on the height-to-width ratio, pixel location, and overlap with past high-scoring boxes. Since it is assumed vehicles don't move that many pixels between frames, this method is effective at detecting and tracking vehicles. However, if the detector doesn't return any boxes (which is common), none of this code executes and the frame is not annotated with the vehicle location. For this reason, the prediction code was added with the hope of drawing a box on every frame.
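The scoring heuristic described above can be sketched as follows. The weights, the target aspect ratio, and the helper name score_box are illustrative assumptions, not values taken from car_detect.m:

```python
def score_box(box, frame_w, frame_h, prev_best=None):
    """Heuristic score for a detected box (x, y, w, h).

    Combines the three cues the report names: height-to-width ratio,
    pixel location, and overlap with the previous high-scoring box.
    All constants here are illustrative assumptions.
    """
    x, y, w, h = box
    # Cars are wider than tall; assume a target height/width ratio of 0.8.
    ratio_term = 1 - min(1, abs(h / w - 0.8))
    # Prefer boxes near the horizontal center of the frame.
    cx = x + w / 2
    center_term = 1 - abs(cx - frame_w / 2) / (frame_w / 2)
    # Reward overlap with the last high-scoring box (vehicles move slowly).
    overlap_term = 0.0
    if prev_best is not None:
        px, py, pw, ph = prev_best
        ix = max(0, min(x + w, px + pw) - max(x, px))
        iy = max(0, min(y + h, py + ph) - max(y, py))
        overlap_term = (ix * iy) / (w * h)
    return ratio_term + center_term + overlap_term

# A centered, car-shaped box outscores an off-center, tall one.
boxes = [(560, 300, 160, 128), (40, 100, 60, 150)]
best = max(boxes, key=lambda b: score_box(b, 1280, 720))
print(best)  # (560, 300, 160, 128)
```

The highest-scoring box would be drawn green and the rest red, matching the annotation scheme above.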

A.4 - Prediction

In order to improve upon the cascade object detector, predictive techniques were implemented to account for the case where the detector fails to detect a car in the current frame. If there has been sufficient data collected on previously tracked vehicles and no vehicle was detected in the current frame, the predictive algorithm is executed. The prediction algorithm uses a weighted sum of user-specified length and previously collected data, held in a queue, to compute a predicted bounding box. The weights are set up such that newer frames have a higher influence on predictions. Since the weight vector is user-specified, the weights can be optimized.

y = (Σᵢ wᵢ xᵢ) / (Σᵢ wᵢ)

Equation 1: Weighted sum, where y is the predicted box, w is the non-normalized weights, and x is previous frame data
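Equation 1 can be sketched directly in code. This Python version and the name predict_box are illustrative, not the actual Matlab implementation:

```python
def predict_box(history, weights):
    """Weighted sum of past bounding boxes (Equation 1).

    history: queue of (x, y, w, h) boxes from previous frames, newest last.
    weights: non-normalized weights of the same length; larger weights on
             newer frames give them more influence on the prediction.
    """
    total = sum(weights)
    # Apply the weighted sum independently to each box coordinate.
    return tuple(
        sum(w * box[k] for w, box in zip(weights, history)) / total
        for k in range(4)
    )

# Two past boxes; the newer one (weight 3) dominates the prediction.
print(predict_box([(0, 0, 10, 10), (4, 4, 10, 10)], [1, 3]))
# (3.0, 3.0, 10.0, 10.0)
```

Because the weights are a plain user-supplied vector, tuning them (as the report suggests) changes only this one argument.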

After predicting the bounding box of the vehicle, an overlap error is calculated to decide whether the predicted box is sufficiently accurate. The region of overlap between the predicted and the most recent detected bounding box is computed to give an overlap error. This value is then normalized to give an overlap error ranging from 0 to 1, where 0 implies a good prediction and 1 implies a bad prediction. If the overlap error is within a specified threshold and the


cascade object detector fails to detect a vehicle for the current frame, then the predicted bounding box is drawn on the current frame in blue. In addition to the predicted blue bounding box being displayed, an overall prediction error is also reported in white as a percentage. The prediction error is calculated as a sum of the overlap error and the distance error, where the distance error is the difference in distance between the predicted and most recent detected bounding box. The average overlap prediction error is 0.17, meaning an 83% overlap exists between the predicted box and previous detections. The distance error comes out to 0.0271 m, which equates to less than a 3 cm deviation in the following distance prediction. Though this algorithm is simple compared to something like a Kalman filter or a neural network, it demonstrates how a fallback to detection can be used to increase confidence and better achieve the goal of tracking the vehicle in front.
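The overlap error can be sketched as follows. Treating it as 1 minus intersection-over-union is an assumption on my part; the report only says the overlap region is computed and normalized to the range [0, 1]:

```python
def overlap_error(a, b):
    """Normalized overlap error between two boxes (x, y, w, h).

    Returns 1 - (intersection area / union area): 0 means perfect
    overlap (good prediction), 1 means no overlap (bad prediction).
    Using IoU for the normalization is an assumption, not confirmed
    by the report.
    """
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Width and height of the intersection rectangle (0 if disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return 1 - inter / union

print(overlap_error((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.0 (identical boxes)
print(overlap_error((0, 0, 10, 10), (5, 0, 10, 10)))  # ≈ 0.667 (half shifted)
```

Under this sketch, the reported average overlap error of 0.17 corresponds to predictions that overlap previous detections by roughly 83%.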

A.5 - Output Video Results

The code is executed by running the car_detect.m code. The video player will open and play the frames (slower than real-time). On each frame, annotations are added depending on whether the code detects or predicts a vehicle. For each successful detection/prediction, a box is drawn with a label of the syntax 'D: X m, C: A x B px', where 'D' refers to the distance to the vehicle and 'C' refers to the center point of the bounding box relative to the frame center, where the x center increases from right to left and y increases from top to bottom. If the text 'Tracking…' is displayed in the lower center, the detection code has identified a high-scoring vehicle, which is marked with a green box. Red boxes are low-scoring boxes that are generally a vehicle, but not the one of interest. If detection fails and prediction succeeds, 'Predicting…' is displayed in the lower center and a cyan bounding box is present.

In analyzing the results, 116 green boxes and 50 predicted boxes are displayed out of 181 frames; a total of 166 out of 181 well exceeds the 50% detection requirement. A sample frame is shown below in Figure 2.

FIGURE 2 VEHICLE SUCCESSFULLY DETECTED (GREEN) AND ANOTHER (RED) WHICH IS NOT OF INTEREST

The following distance to the green/cyan boxes is plotted below in Figure 3. Upon visual inspection and comparison to the output video, this seems correct. The sharp jumps and discontinuities are a result of the choppy frame rate of the input video and false positive detections.


FIGURE 3 FOLLOWING DISTANCE PER FRAME

Validation code compiled from C was also used to help validate the real-time prediction Matlab code. The validation code is much more accurate at identifying and tracking vehicles and can be used as a benchmark model. As our real-time code improves, it can continuously be compared and validated against this benchmark. Unfortunately, this code is anything but real-time, as it takes about 5 seconds per frame to find all vehicles (no ROI or other processing is used to speed it up). This method uses Kalman tracking and a much more advanced vehicle detection model to both identify and track vehicles. Each vehicle is given a uniquely colored rectangular box. Also, note that this code does not use stereo information; it is purely for detecting and tracking. The code can be provided upon request, as it is not a direct requirement for this activity. A screenshot is shown below in Figure 4.

FIGURE 4 DETECTION USING THE C BASED CODE


B - Activity 2: Sign Identification

B.1 - Program Description

The goal of this activity is to identify stop signs in a given frame. The main code file is stopsign_detect.m, and the helper functions train_detector.m, annotate_box.m, and extract_red_sign.m are also referenced. Two methods are presented to achieve this detection. First, the cascade object detector utilizes a detector, trained by our team, to identify stop signs in various lighting conditions and at various angles. If stop signs are identified, the bounding box(es) are returned and annotated as cyan boxes on the frame. If no sign is detected, the experimental color based detection tries to find a stop sign using color thresholding and blob detection. Similarly, any identified stop signs are represented by a bounding box and annotated in green. Just as in Activity 1, an ROI is used to improve detection and performance. The block diagram process is shown below in Figure 5.

FIGURE 5 BLOCK DIAGRAM OF ACTIVITY 2 CODE

B.2 - Object Detection

The code for object detection is very similar to Activity 1. The main difference is that it relies on a detector generated by the UW EcoCAR team. This was the biggest challenge of the activity, since the goal was to detect the sign in >90% of the frames. To train the detector, a large database of positive and negative samples needed to be obtained. To make this process easier, a simple Java program was developed to display the first 100 Google image search results for a certain query like 'stop sign photo'. The GUI, shown in Figure 6, allows one to easily choose whether or not to put a result into the database as either a positive or a negative example. After collecting roughly 300 positive stop sign images and about 1400 negative images of roads that did not contain stop signs, it was believed there was enough training data to create an accurate detector. After experimentation and tuning in train_detector.m, it was determined that using 10 stages with a false alarm rate (FAR) of 0.4 yielded the optimum combination.

Once a sign is identified, the annotate_box.m function is called to draw a cyan box indicating the object detector found the sign. Despite our efforts to create an accurate detector, only 20% detection was achieved in the competition


provided video. This could be a failure in our training method, a limitation in the Mathworks detector, or a problem with our code. Rather than continuing the tedious work of building a training database, a color based detection algorithm was added as an experimental fallback.

FIGURE 6 JAVA IMAGE SCRAPER TO POPULATE TRAINING DATABASE

B.3 - Color Based Detection

To achieve >90% detection, the stop sign needed to be identified even when the sign is represented by just a few pixels. To do so, we took advantage of the stop sign's unique red color and applied color thresholding in the HSV color space to create a binary mask such that red pixels are 1 and everything else is 0. This essentially creates white blobs in regions with red color. This type of thresholding often yields small specks of white pixels scattered across the mask. Since we are interested in a group of pixels, the function bwareaopen(BW, 70) is applied to the binary mask, effectively removing white clusters smaller than 70 px and leaving the larger blobs. Next, the morphological operation imclose(BW, strel('disk', 15)) is applied to solidify the blobs by filling any holes smaller than 15 pixels with white. All of this processing takes place in the extract_red_sign.m function. Following this, the regionprops command is used, which creates a bounding box for each blob. Then, each box in the set is analyzed, and only boxes exhibiting a ratio close to a square are kept, since a stop sign's height and width are equal. Finally, the annotate_box.m function is called to draw a green box indicating that the color detector found the sign.
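The final aspect-ratio filter can be sketched as follows. The tolerance of 0.2 and the helper name keep_square_boxes are illustrative assumptions, not values from extract_red_sign.m:

```python
def keep_square_boxes(boxes, tol=0.2):
    """Keep bounding boxes whose height/width ratio is near 1.

    A stop sign is roughly as tall as it is wide, so boxes far from
    square are discarded.  tol=0.2 is an illustrative threshold.
    """
    return [(x, y, w, h) for (x, y, w, h) in boxes
            if abs(h / w - 1) <= tol]

# A near-square sign blob survives; a wide tail-light streak does not.
boxes = [(0, 0, 40, 42), (10, 10, 80, 20)]
print(keep_square_boxes(boxes))  # [(0, 0, 40, 42)]
```

As the next section notes, this ratio check is also exactly where red cars and tail lights can slip through, since any red blob that happens to be square passes the filter.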

B.4 - Competition Provided Video Results

Running the stopsign_detect.m code is similar to Activity 1. As stated, there are two types of annotations, indicated by the text in the bottom center of the frame. Each box is labeled 'Dim A x B px Center: C x D px' as defined by the rules, where 'Dim' is the box dimension and 'Center' is the coordinates of the box center, where the x center increases from right to left and y increases from top to bottom. With the color detection enabled, the code identifies 229 boxes in the 368 frames, which is fair given that the stop sign is the only sign present until about frame 240. This 95% (229/240) detection rate is a huge improvement over our 20% detection using only the cascade object detector. An example of the cascade detector finding the sign is shown in Figure 7 below.


FIGURE 7 COMPETITION PROVIDED VIDEO OUTPUT FRAME

Unfortunately, the experimental color detection only works when there are not a lot of red pixels in the frames. If, for example, a red car drove by, it would likely be detected as a stop sign, given that its width-to-height ratio fits into the specified bounds. For this reason, it was disabled for the team video submission. With further tuning, this color based method could prove to be a formidable fallback, but as of now it should be considered experimental.

B.5 - Team Provided Video Results

The team supplied video came from YouTube (link) and was downloaded with standard quality. Aside from disabling the color based detector and tweaking the ROI, the code is equivalent to the competition provided submission. The ROI could eventually be self-tuned and dynamic; however, it is fixed for now. This video has red pixels other than stop signs, so it is easy to see why the color based detection is not as reliable. Common false positives include tail lights, red cars, buildings, and other red signs. An example frame is shown below in Figure 8.

Improvements to color based detection mainly fall under the tuning category. A prediction algorithm similar to the one shown in Activity 1 could be used to better identify stop signs based on previous frames. This historical data could also be used to tune the height-to-width ratio and help reduce false positives. Additionally, signs are static, and many of the false positives are attempts to track moving vehicles; filtering could be added to prevent detection of moving objects. Finally, optical character recognition (OCR) could be used to attempt to identify the characters 'STOP' in a detection. If none of the letters are seen, the detection could be ignored. This, however, would greatly impact the real-time performance. A validation model, like the one demonstrated in Activity 1, could be used to provide a benchmark for detection. Such a model could meticulously scan each frame for stop signs and output box coordinates. Then, when the real-time model is tested, the outputted boxes could be spatially compared to the benchmark results and converted to an error parameter that can be used for iteratively tuning the model.


FIGURE 8 FALSE POSITIVES OF COLOR DETECTOR

The cascade object detector did manage to find all of the stop signs aside from one that was almost parallel with the road. An example frame is shown in Figure 9. The main issue is that it usually doesn't detect the sign until the car is within 10 meters. To improve this, the detector could be trained with smaller images of stop signs. Transformations and rotations could also be applied to the training set to help identify signs at different angles. A more advanced classifier model could also be used to dramatically improve performance. A convolutional neural network, for example, could autonomously learn a complex model for classifying stop signs as well as other signs. In summary, more training data and a more complex detection scheme could greatly improve results.

FIGURE 9 STOP SIGN DETECTION USING ONLY THE CASCADE DETECTOR


C - Activity 3: S32V Interface Demonstration

C.1 - Workspace

FIGURE 10 WORKSPACE INCLUDES A LINUX WORKSTATION, NXP S32V BOARD, AND MACBOOK PRO

The workspace, shown in Figure 10 above, includes the NXP S32V vision board equipped with OpenCV and the Vision SDK 9.1. The included Sony camera was mounted to slot B, a Linux workstation is utilized for building code, and a MacBook was used for playing the YouTube input video (link). In order to communicate between the remote board and the host Linux machine, an SSH server was set up on the board. To set up SSH, the /etc/network/interfaces file was edited to include the static IP address, netmask, and gateway. With this workspace setup, files on the board could be edited remotely using ssh, and code could be compiled and transferred to the board from the workstation using scp. The MacBook was used to display real-time videos in order to emulate what a car may be exposed to on the road.

C.2 - Research

To develop this submission, the Vision SDK environment and toolchain were studied. The isp_csi_dcu.cpp demo source code was useful in figuring out how the camera collects frames and processes information. The face_detection.cpp code was helpful for understanding how to integrate OpenCV libraries into the codebase. After becoming comfortable with the supplied demos, the submission code was started. The main code is called isp_framerate.cpp and is heavily based on the isp_csi_dcu.cpp demo. There were two main objectives for the source code: first, calculate the frame rate, and second, print text on each frame.

C.3 - Implementation

To calculate the frames per second (FPS), the elapsed time for a single frame was stored. The FPS was then calculated as:

FPS = 1 / (elapsed time for one frame)

Equation 2: FPS Calculation

For debugging purposes, the FPS calculation was printed to the console. The next step was to print this information on each frame before outputting to the display. To alter the frames, the optimized OpenCV libraries were employed. It was confirmed that the necessary OpenCV library paths were linked correctly; then the next step was to display text on the screen.
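The FPS measurement amounts to timing one frame and inverting the elapsed time. Here is a minimal sketch, in Python rather than the C++ used on the board, with an arbitrary stand-in workload instead of the real frame processing:

```python
import time

def measure_fps(process_frame, frame):
    """Store the elapsed time for a single frame, then return
    FPS = 1 / elapsed, as in Equation 2."""
    start = time.perf_counter()
    process_frame(frame)  # stand-in for the per-frame processing
    elapsed = time.perf_counter() - start
    return 1.0 / elapsed

# Illustrative workload; on the board this would be the ISP/OpenCV pipeline.
fps = measure_fps(lambda f: sum(f), list(range(100000)))
print(f"FPS: {fps:.1f}")
```

Averaging over several frames would smooth the jitter a single-frame measurement has, at the cost of slower response to frame-rate changes.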


FIGURE 11 BLOCK DIAGRAM OF ACTIVITY 3

The line Mat frame = Mat(720, 1280, CV_8UC3, lpFrame) converts the raw frame to OpenCV's cv::Mat class from a void * lpFrame (a pointer returned from the OS Abstraction Library (OAL) memory allocation function). The cv::Mat frame is used to draw text on the frame using the cv::putText function, which provides parameters such as font face, font scale, color, thickness, etc. Once the text was successfully displayed on the bottom left of the screen, it needed to be rotated to the correct orientation, since the text was shown upside down relative to the camera frames. This issue stems from the fact that the built-in display was installed upside down. Though it is not a requirement of this activity, the issue was investigated to make the display easier to read.

There are two stages where the frame rotation can be adjusted: pre- and post-rotation. Pre-rotation is handled directly by the Sony camera by configuring the camera geometry. The code below flips the display such that the frame is shown upside down without any additional post-rotation.

SONY_Geometry_t lGeo;
SONY_GeometryGet(CSI_IDX_0, &lGeo);
lGeo.mVerFlip = 1; // flip about y-axis
lGeo.mHorFlip = 1; // flip about x-axis
SONY_GeometrySet(CSI_IDX_0, &lGeo);

Post-rotation is handled on the frame after it has been represented as a cv::Mat data structure, which supports the cv::flip function. Note that a flip operation about a single axis is not equivalent to rotation; the flip must be about both the x and y axes to be equivalent to rotating 180 degrees. The following code performs the 180 degree rotation if ORIENTATION is set to 1.

if (ORIENTATION) {
    flip(frame, frame, -1);
}

The third argument to the flip operation is the flip code; -1 yields a flip about both the x and y axes. The performance cost of this rotation is not trivial: the FPS is about 3x faster when ORIENTATION == 0 and the cv::flip call is omitted, so the final submission initializes ORIENTATION == 0. The video submission of the display output was instead rotated using editing software so that it is clearer to the viewer.
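The claim that flipping about both axes equals a 180-degree rotation can be checked with a toy matrix. This Python sketch mirrors the behavior of the C++ cv::flip call with flip code -1; it is not the board code:

```python
def flip_both_axes(img):
    """Flip a 2-D image about both axes, like cv::flip(src, dst, -1).

    Reversing the row order flips vertically; reversing each row flips
    horizontally.  Together they equal a 180-degree rotation.
    """
    return [row[::-1] for row in img[::-1]]

img = [[1, 2],
       [3, 4]]
print(flip_both_axes(img))  # [[4, 3], [2, 1]]
```

A single-axis flip of the same matrix would instead produce a mirror image, which is why flipping about only one axis cannot fix upside-down text.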


C.4 - Issues

Workspace issues mostly came down to misunderstanding the Vision SDK capabilities and workflow. There were initial problems getting networking on the board and additional issues building the demos. After an improved understanding of the toolchain and directory structure, the necessary file system tweaks were applied and the demos were finally functional. It was also discovered that the make clean operation sometimes deleted headers that are needed to build the code. This was fixed by editing the makefile.

In the implementation, there was trouble using the OpenCV namespace, but the solution came down to properly linking the libraries when building the program. Additionally, understanding how to convert lpFrames to a cv::Mat was a struggle; however, looking at other demo code helped resolve this. Lastly, there was the rotation issue discussed above.