
About
Hi, I am Zheng-Yan Wu.
I am a master's student at National Taiwan University.
I am currently studying SLAM, 2D & 3D Computer Vision, and Reinforcement Learning.
Education
- Taichung First Senior High School (2014 / 9 ~ 2017 / 6)
- National Taiwan Normal University (2017 / 9 ~ 2021 / 6)
  - Major: Mechatronic Engineering
  - Double Major: Electrical Engineering
- National Taiwan University (2021 / 9 ~ Now)
  - Graduate Institute of Mechanical Engineering
  - Research Scholar at NTU Robotic Lab
Teaching
- Teaching Assistant for the Computer Programming course at NTNU ME (2019)
- Teaching Assistant for the Labs of Digital Logic course at NTNU ME (2019)
- Teaching Assistant for The Principles and Application of Sensors course at NTNU ME (2020)
University-Industry Cooperation
- Taiwan Industrial Technology Research Institute Intelligence Robot Talent Education (2019 / 3 ~ 2019 / 11)
- Stamping Machinery Intelligent Sensing Industry-University Project Internship (2019 / 7 ~ 2019 / 8)
Special Topics
- A CNN-based Sleep Posture Recognition System Using a Pressure-Sensitive Mattress
- Nonlinear Signal Prediction and Causality Analysis Based on Granger Causality with Implementation of CNN-BiLSTM Model
Languages
- Native: Mandarin, Taiwanese
- Proficient: English
- Basic: Japanese
Skills
- Proficient: C, C++, Python
- Intermediate: C#, HTML5
Featured Projects
Microsoft - Light-Weight Facial Landmark Prediction Challenge
We trained a model to predict 68 2D facial landmarks from a single cropped face image with high accuracy, high efficiency, and low computational cost.


Point Cloud Semantic Completion
3D point cloud completion restores broken, incomplete point cloud objects to their complete appearance. We use semantic information about the parts that need to be added to improve the quality of the completed object, both the integrity of the whole and the fineness of the details.


Play Table Hockey with a 5-DoF Mechanical Arm
The moving hockey ball is tracked with a RealSense D435i RGB-D camera. When the ball enters the area the robot arm can reach, inverse kinematics is solved using the ball coordinates predicted by a Kalman filter, and the arm moves to the intercept position.
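The prediction step above can be sketched with a minimal constant-velocity Kalman filter. This is an illustrative version only: the state layout, time step, and noise matrices are my assumptions, not the exact parameters used in the project.

```python
import numpy as np

class BallKalmanFilter:
    """Constant-velocity Kalman filter for tracking a ball in the 2D table plane.

    State vector: [x, y, vx, vy]. Measurements are (x, y) positions
    from the RGB-D camera. All noise values below are assumed.
    """

    def __init__(self, dt=1 / 30):
        self.x = np.zeros(4)            # initial state estimate
        self.P = np.eye(4)              # initial state covariance
        # State transition: position advances by velocity * dt.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # We observe only (x, y).
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 1e-3       # process noise (assumed)
        self.R = np.eye(2) * 1e-2       # measurement noise (assumed)

    def predict(self):
        """Propagate the state one time step ahead; returns predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the estimate with a camera measurement z = (x, y)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R            # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In use, each camera frame drives one `predict`/`update` cycle, and calling `predict` again extrapolates where the ball will be next, which is the position handed to the inverse-kinematics solver.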