Senior Software Engineer/Consultant [email protected]
+49 170 716 2482
Profile
Four years of industrial experience in the design of image-processing
algorithms, conventional camera modules, depth-sensing
camera modules, and embedded Linux software.
Learns quickly and takes initiative – studied the entire
camera-module manufacturing process and developed the
key algorithms that improve the optical performance and
product quality of camera modules.
Skilled at developing complex systems, including algorithm
design and HW/SW integration. The most recent project is 6-axis
active alignment – controlling a 15-degree-of-freedom machine
to align two 7x7 mm components and reduce the tilt angle
between them to 0.03°.
Experience and Major Achievements
Lite-On [a global top-4 camera-module provider] 2011 - Present
Senior Software Engineer, Automation Division.
Responsible for developing image-processing algorithms,
including OIS module calibration, active alignment, the color image
pipeline, auto focus, auto exposure, color correction, color-aliasing
removal, digital zoom, and lens shading correction
[appendix I2 ~ I10].
Software Consultant, NPI Team and Surveillance Division.
Responsible for developing algorithms for stereo camera calibration,
RGB-ToF camera-module calibration, depth estimation, and 3D
point-cloud analysis with the PCL library.
Additionally, YC studies color transfer between images and
unsupervised learning [appendix I11 ~ I14].
Machvision [AOI machine manufacturer] 2010 - 2011
Associate Research Engineer
Developed automated optical inspection (AOI) algorithms. One
major achievement was designing an algorithm that estimates the
golden image for objects with large variation in shape and
appearance [appendix I15].
YC Cheng 鄭詠成
Certificate &
Award
Ph.D. candidate (passed the qualifying exams in algorithms,
computer architecture, operating systems, and complexity),
Institute of Computer Engineering, National Chiao Tung
University (35th worldwide*).
One US patent and one Taiwan patent [appendix I20 and I21].
ITRI annual paper award and CVGIP excellent paper award.
IELTS 6.0.
Major Contribution at Work
Part 1: Industrial automation – the conventional manufacturing process of camera
modules. YC's contributions are labeled with red arrows.
Part 2: Depth-sensing camera modules – calibration, depth estimation, and 3D
point-cloud analysis.
* The ranking is based on Shanghai Jiao Tong University's Academic Ranking of World Universities, Computer Science field, 2013.
Supplement
Materials
I1. Summary
The 3 strategies for bringing customers the best image quality:
- Adaptive module assembly
  - Focus alignment with MTF or SFR
  - OIS module calibration
  - Active alignment (multiple regions, e.g., 9-region)
- Image pipeline
  - Auto focus
  - Auto exposure
  - Color image pipeline, e.g., edge-aware color interpolation
  - Digital zoom
  - Color-aliasing removal
  - Adaptive tone enhancement
  - Lens shading correction
    - Bi-modal image modeling
    - Shading parameter estimation and correction
- Quality control
  - IQ testing
    - Frequency component analysis
    - Tilt and field-curvature estimation
    - Optical center estimation
  - Automatic optical inspection
  - Simulation of lens shading and through-focus curve
  - Assessment, e.g., Nokia VUP
In summary, the techniques used include:
- RANSAC
- Watershed
- Mean-shift
- PCA
- Cubic spline
- Newton's divided differences
- Dynamic time warping
- Camera calibration and homography transformation
- Kalman filter
- Fourier transform
- Sigmoid function
- Hybrid gamma curve
- Encoded finder pattern
- Circle and ellipse fitting by LSE
- Image processing techniques
- Machine and I/O control
- EE (Arduino and Raspberry Pi)
- SW project management: code-documentation tool, version control with
  remote backup, troubleshooting manual (for the production line), and
  knowledge-base website (for internal use)
I2. Optical Image Stabilization (OIS) Module Calibration:
Find the optimal gain that yields the best shake compensation.
I3. 6-Axis Active Alignment (AA):
Eliminate the tilt angle between sensor and lens when assembling the camera
module. In the current process, the tilt angle can be reduced to 0.03° by a
15-degree-of-freedom machine (9 motors: buffer tray + dispensing + 6-axis AA;
5 I/Os: dispensing + vacuum + de-vacuum + gripper + UV; machine
states: standby + PnP + dispensing + component loading + AA).
[Figure: the control loops Loop0, Loop1, and Loop2 of the AA process.]
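The tilt-elimination step above can be sketched as a least-squares plane fit over probed surface heights; the probe grid and the 1° example below are hypothetical, not the machine's actual sensing method.

```python
import numpy as np

def tilt_angle_deg(points):
    """Fit a plane z = a*x + b*y + c to probed surface points (N x 3)
    and return its tilt angle relative to the XY reference plane."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    # Normal of the fitted plane is (-a, -b, 1); tilt is its angle to +Z.
    normal = np.array([-a, -b, 1.0])
    cos_t = normal[2] / np.linalg.norm(normal)
    return np.degrees(np.arccos(cos_t))

# A synthetic 7x7 mm surface tilted by 1 degree about the Y axis:
x, y = np.meshgrid(np.linspace(0, 7, 5), np.linspace(0, 7, 5))
z = np.tan(np.radians(1.0)) * x
tilt = tilt_angle_deg(np.c_[x.ravel(), y.ravel(), z.ravel()])
```

The 6-axis stage would then rotate one component by the estimated tilt to null it out.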
I4. Color Image Pipeline – Color Interpolation:
An interpolation method optimized for edge-like content.
[Figure: comparison of the reference algorithm and YC's result.]
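A minimal sketch of the edge-aware idea, assuming a Bayer mosaic: interpolate green at a non-green site along the direction of the smaller gradient instead of averaging blindly. The data below is illustrative.

```python
import numpy as np

def green_at(raw, r, c):
    """Edge-aware estimate of the green value at a non-green Bayer site
    (r, c): average along the direction with the smaller gradient."""
    dh = abs(float(raw[r, c - 1]) - float(raw[r, c + 1]))  # horizontal gradient
    dv = abs(float(raw[r - 1, c]) - float(raw[r + 1, c]))  # vertical gradient
    if dh < dv:
        return (float(raw[r, c - 1]) + float(raw[r, c + 1])) / 2.0
    return (float(raw[r - 1, c]) + float(raw[r + 1, c])) / 2.0

# Sharp vertical edge: a naive horizontal average would give 105 and blur it;
# the edge-aware choice interpolates along the edge and returns 200.
raw = np.tile([10.0, 10.0, 200.0, 200.0], (4, 1))
g = green_at(raw, 1, 2)
```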
I5. Color Image Pipeline – Color-aliasing Removal:
Remove the false color on the edges.
I6. Color Image Pipeline – Digital Zoom (4x):
Increase the image resolution by a frequency-domain approach.
[Figure: source image vs. 4x result, before and after.]
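One common frequency-domain upsampling approach (a sketch, not necessarily the method used here) is to zero-pad the centered 2-D spectrum, which amounts to sinc interpolation:

```python
import numpy as np

def fft_zoom(img, factor=4):
    """Upsample by zero-padding the centred 2-D spectrum (sinc interpolation)."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = h * factor, w * factor
    P = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    P[top:top + h, left:left + w] = F            # embed spectrum in larger grid
    # ifft2 normalizes by H*W, so restore amplitude with factor**2.
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor ** 2

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # synthetic ramp image
big = fft_zoom(img, 4)
```

Zero-padding preserves the original frequency content exactly, so the mean brightness is unchanged.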
I7. Color Image Pipeline – Quick Auto Exposure (AE with 2 input frames):
An AE algorithm for uncalibrated camera modules under stationary
illumination.
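A two-frame AE can be sketched as follows, assuming an approximately linear sensor response: probe with two exposures, fit mean brightness vs. exposure, and solve for the exposure that hits the target mean. The numbers are hypothetical.

```python
def quick_ae(mean1, exp1, mean2, exp2, target=118.0):
    """Two-frame auto exposure for an uncalibrated, roughly linear sensor:
    fit mean = a * exposure + b from two probe frames, then solve for
    the exposure that reaches the target mean brightness."""
    a = (mean2 - mean1) / (exp2 - exp1)   # response slope
    b = mean1 - a * exp1                  # offset (black level etc.)
    return (target - b) / a

# Probe frames at 10 ms and 20 ms measured means of 40 and 80 DN:
exposure = quick_ae(40.0, 10.0, 80.0, 20.0, target=118.0)
```

Because only two frames are needed, the module can converge in one extra capture instead of iterating.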
I8. Color Image Pipeline – Lens Shading Correction:
Left: the input image (std: 13.17 DN); right: the result (std: 0.59 DN).
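A minimal radial-falloff sketch of shading correction (the actual pipeline's model may differ): fit a polynomial in r² to a flat-field frame and divide it out.

```python
import numpy as np

def shading_correct(flat):
    """Fit a radial polynomial model to a flat-field frame and return
    the gain-corrected frame (flattened illumination)."""
    h, w = flat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2).ravel()
    A = np.c_[np.ones_like(r2), r2, r2 ** 2]     # basis: 1, r^2, r^4
    coef, *_ = np.linalg.lstsq(A, flat.ravel(), rcond=None)
    model = (A @ coef).reshape(h, w)
    return flat * (model.max() / model)          # per-pixel gain

# Synthetic flat field with radial falloff:
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
flat = 200.0 * (1.0 - 0.001 * ((xx - w / 2) ** 2 + (yy - h / 2) ** 2))
out = shading_correct(flat)
```

On this synthetic frame the corrected standard deviation drops to essentially zero, mirroring the 13.17 DN to 0.59 DN improvement reported above.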
I9. Color Image Pipeline – Color Correction:
In each color patch, the upper half, the lower half, and the small patch within
the lower half are the target color, the sampled color, and the corrected color,
respectively.
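A common formulation (a sketch; the production method may add constraints) solves a 3x3 color-correction matrix by least squares from sampled/target patch pairs:

```python
import numpy as np

def fit_ccm(sampled, target):
    """Least-squares 3x3 colour-correction matrix M with target ~= sampled @ M."""
    M, *_ = np.linalg.lstsq(np.asarray(sampled, float),
                            np.asarray(target, float), rcond=None)
    return M

# Hypothetical chart: the sensor mixes channels by a known crosstalk matrix.
true_M = np.array([[ 1.2, -0.1, -0.1],
                   [-0.2,  1.3, -0.1],
                   [ 0.0, -0.2,  1.2]])
target = np.random.default_rng(0).uniform(0, 255, (24, 3))   # 24 patches
sampled = target @ np.linalg.inv(true_M)                     # what the sensor saw
M = fit_ccm(sampled, target)
corrected = sampled @ M
```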
I10. Summary of Image Pipeline:
I11. Stereo Camera Calibration and Online Depth Estimation:
Use two webcams to build a stereo camera and perform online depth estimation.
Camera calibration extracts the intrinsic and extrinsic parameters; the images
are then rectified and used to generate the 3D point cloud.
Stereo camera and the camera calibration:
Depth Space Image:
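After rectification, depth follows from disparity by triangulation, Z = f·B/d. A sketch with illustrative numbers:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth for a rectified stereo pair: Z = f * B / d.
    Zero/negative disparities are marked as infinitely far."""
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z

# 700 px focal length, 6 cm baseline, 35 px disparity:
z = depth_from_disparity(np.array([35.0]), 700.0, 0.06)
```

Back-projecting each (u, v, Z) through the intrinsics then yields the 3D point cloud.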
I12. RGBD Camera Module Calibration:
Finding the correspondence between the RGB image and the depth map is
essential to depth-related applications, such as re-focusing and generation of
3D point clouds. To estimate the correspondence, the general idea is to find
the intrinsic parameters and the relative orientation of the two sensors; the
correspondence is then found by projecting the objects captured by the depth
sensor onto the RGB sensor.
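The projection step just described can be sketched as back-project, change frame, re-project; the intrinsics and the 5 cm offset below are illustrative values, not the module's calibration.

```python
import numpy as np

def depth_to_rgb_pixel(u, v, z, K_d, K_rgb, R, t):
    """Map a depth-sensor pixel (u, v) with depth z (metres) to RGB pixel
    coordinates, given intrinsics K_d / K_rgb and relative pose (R, t)."""
    p_d = z * np.linalg.inv(K_d) @ np.array([u, v, 1.0])  # back-project to 3D
    p_rgb = R @ p_d + t                                   # depth frame -> RGB frame
    uvw = K_rgb @ p_rgb                                   # re-project
    return uvw[:2] / uvw[2]

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Identity rotation, 5 cm horizontal offset between the two sensors:
uv = depth_to_rgb_pixel(320, 240, 1.0, K, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
```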
I13. Color Transfer between Images:
The method is based on a color-space transformation. The left image contains
the target colors. The upper-right and lower-right are the original image and
the result image, respectively.
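A simplified stand-in for the idea (the CV's method works in a transformed color space; this sketch matches per-channel statistics directly, in the spirit of Reinhard-style transfer):

```python
import numpy as np

def transfer(source, target_img):
    """Per-channel statistics transfer: shift/scale each channel of the
    source so its mean and std match those of the target image."""
    src = source.astype(float)
    tgt = target_img.astype(float)
    out = np.empty_like(src)
    for ch in range(src.shape[2]):
        s, t = src[..., ch], tgt[..., ch]
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-12) * t.std() + t.mean()
    return out

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (16, 16, 3))     # dark source image
tgt = rng.uniform(100, 255, (16, 16, 3))   # bright target palette
res = transfer(src, tgt)
```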
I14. Unsupervised Learning – Feature Selection:
Use sparse coding to obtain better features. Sparse coding is an iterative
method that finds the feature vectors and the dictionary using matching
pursuit and k-SVD, respectively.
The Dictionary:
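The matching-pursuit half of the iteration can be sketched as a greedy loop; the random dictionary below is illustrative (in practice the dictionary comes from the k-SVD updates):

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero=5):
    """Greedy sparse coding: pick the unit-norm dictionary atom most
    correlated with the residual, record its coefficient, repeat."""
    residual = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))
        coef[k] += corr[k]                 # atoms are unit-norm
        residual -= corr[k] * D[:, k]
    return coef

rng = np.random.default_rng(2)
D = rng.normal(size=(8, 16))
D /= np.linalg.norm(D, axis=0)             # normalize atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 7]          # signal built from two atoms
coef = matching_pursuit(x, D)
err = np.linalg.norm(D @ coef - x)
```

Each pick strictly shrinks the residual, so the reconstruction error falls below the input norm.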
I15. AOI – Adaptive Golden Image:
Find defects on the LED cup and LED die.
Image samples and defects:
Good image samples:
Results:
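One robust way to build such a reference (a sketch, not necessarily the patented algorithm) is the pixelwise median of aligned good samples, compared against the test image with a threshold:

```python
import numpy as np

def golden_image(samples):
    """Estimate a 'golden' reference as the pixelwise median of aligned
    good samples; robust to normal variation and sporadic outliers."""
    return np.median(np.stack(samples), axis=0)

def defect_map(image, golden, thresh=30.0):
    """Flag pixels deviating from the golden image beyond a threshold."""
    return np.abs(image.astype(float) - golden) > thresh

rng = np.random.default_rng(3)
goods = [100.0 + rng.normal(0, 2, (16, 16)) for _ in range(9)]
g = golden_image(goods)

bad = goods[0].copy()
bad[4, 4] = 220.0                  # injected bright defect
mask = defect_map(bad, g)
```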
I16. EE Project – Bluetooth Level Meter:
Use the InvenSense MPU6050 and a Kalman filter to estimate the angle, then
send the measurement to an Android phone over Bluetooth.
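A minimal 1-D Kalman filter of the kind used here: the gyro rate drives the prediction and the accelerometer-derived angle is the measurement. The noise parameters are placeholders, not the project's tuned values.

```python
class AngleKalman:
    """Minimal 1-D Kalman filter fusing a gyro rate (prediction) with an
    accelerometer-derived angle (measurement)."""
    def __init__(self, q=0.01, r=0.5):
        self.angle, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r           # process and measurement noise

    def update(self, gyro_rate, accel_angle, dt):
        self.angle += gyro_rate * dt    # predict with integrated gyro rate
        self.p += self.q
        k = self.p / (self.p + self.r)  # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle

kf = AngleKalman()
for _ in range(100):                    # board held still at 10 degrees
    est = kf.update(gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```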
I17. EE Project – Self Balancing Robot:
Use a gyro, a Kalman filter, PID control, I2C, and PWM to balance the two-
wheel robot (the code is complete; the PID parameters are being tuned).
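The balance loop's controller can be sketched as a textbook PID; the gains below are placeholders, not the robot's tuned values.

```python
class PID:
    """Textbook PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1)
# One control step with a 1-degree tilt error at 100 Hz:
u = pid.step(error=1.0, dt=0.01)
```

The output `u` would be mapped to a PWM duty cycle driving the wheel motors.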
I18. EE Project – Online Face Detection on Raspberry Pi:
Detect faces and overlay a glasses image on each detected face.
I20. Master’s dissertation:
Use a camera to estimate the pose of the subject’s head.
The goal of the system is to estimate the coordinate transformation HEAD_T_MEG
between the subject’s head frame C_HEAD and the machine frame C_MEG. The
conventional approach uses positioning coils attached to the subject’s head, but
these can only be used before the experiment, as they would otherwise disturb the
magnetoencephalography (MEG) measurement. The proposed system tracks the 3D
coordinates of the subject’s head during the experiment; the track is used to
estimate HEAD_T_MEG and to compensate the artifacts caused by head movement.
(a) The MEG machine and the camera-calibration setup. (b) The pattern for
CAM_T_MEG estimation. (c) The pattern for HEAD_T_CAM estimation.
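The two calibrated transforms chain into the desired one, HEAD_T_MEG = HEAD_T_CAM · CAM_T_MEG. A sketch with illustrative poses (identity rotations and made-up offsets):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative poses, not calibration results:
CAM_T_MEG = make_T(np.eye(3), [0.0, 0.0, 1.0])   # MEG frame seen from the camera
HEAD_T_CAM = make_T(np.eye(3), [0.0, 0.1, 0.0])  # camera frame seen from the head
HEAD_T_MEG = HEAD_T_CAM @ CAM_T_MEG              # chain the transforms

p_meg = np.array([0.0, 0.0, 0.0, 1.0])           # MEG origin, homogeneous
p_head = HEAD_T_MEG @ p_meg                      # same point in the head frame
```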
I21. Patent – I3 (Integrated, Interactive, and Immersive) Surveillance System
http://www.youtube.com/watch?v=LAcAkLDRIY0