"The goal is to turn data into information, and information into insight."
— Carly Fiorina



Virtual Keyboard
Project Description:
Technologies: Python, OpenCV, NumPy, PyAutoGUI
An interest in human-computer interaction inspired me to design a project where a person could type without touching a physical keyboard. Using a webcam feed, I developed a gesture recognition system with OpenCV that detected hand contours and mapped them to keyboard events.
The pipeline included image preprocessing, contour detection, and finger-tracking logic. Once gestures were recognized, I used PyAutoGUI to simulate keystrokes on the system. This created a functioning virtual keyboard controlled entirely by hand movements.
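The sketch below illustrates one way such a pipeline can be wired together. It is a minimal illustration rather than the project's actual code: the key layout, threshold values, and the fingertip heuristic are assumptions made for the example.

```python
import cv2
import pyautogui

# Hypothetical key layout: horizontal pixel bands mapped to characters
# (illustrative only, not the original project's layout).
KEY_REGIONS = {"a": (0, 200), "s": (200, 400), "d": (400, 600)}

def fingertip_from_frame(frame):
    """Return (x, y) of a candidate fingertip, or None if no hand is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # Binary mask separating the hand from the background (assumes a
    # dark, uncluttered backdrop, i.e. a controlled setting).
    _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 3000:  # ignore small noise blobs
        return None
    # Topmost contour point is a cheap fingertip proxy for an upright hand.
    x, y = hand[hand[:, :, 1].argmin()][0]
    return int(x), int(y)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tip = fingertip_from_frame(frame)
    if tip is not None:
        for key, (x0, x1) in KEY_REGIONS.items():
            if x0 <= tip[0] < x1:
                pyautogui.press(key)  # simulate the keystroke
                break                 # a real system would also debounce
    cv2.imshow("virtual keyboard", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```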
While experimental, the project showcases how computer vision techniques can be applied to accessibility solutions and innovative interfaces.
Skills Showcase:
Real-time computer vision with OpenCV
Gesture recognition using contour and feature detection
Mapping gestures to simulated keystrokes with PyAutoGUI
Python modules (a brief usage sketch follows this list):
pickle: to serialize data derived from the image objects.
argparse: to parse and validate command-line arguments.
pyautogui: to simulate keystrokes on the system once a gesture is recognized.
Image preprocessing and mask creation for robustness
Applied HCI concepts to accessibility design
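A minimal sketch of how the pickle and argparse pieces might fit together is below. The flag names and the calibration file are hypothetical, chosen only to illustrate the roles described in the list above.

```python
import argparse
import pickle

def parse_args():
    """Parse command-line arguments (flag names here are illustrative)."""
    parser = argparse.ArgumentParser(description="Virtual keyboard demo")
    parser.add_argument("--camera", type=int, default=0,
                        help="webcam device index")
    parser.add_argument("--calibration", default="calibration.pkl",
                        help="path to pickled calibration data")
    return parser.parse_args()

def load_calibration(path):
    """Deserialize previously saved data (e.g. thresholds, key layout)."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return {}

def save_calibration(path, data):
    """Serialize calibration data so a session can be reproduced later."""
    with open(path, "wb") as f:
        pickle.dump(data, f)

if __name__ == "__main__":
    args = parse_args()
    calib = load_calibration(args.calibration)
    print(f"camera={args.camera}, calibration keys={list(calib)}")
```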
Key Insights:
Webcam input can reliably substitute for hardware input in controlled settings
Hand contour recognition proved more robust than color thresholding (a comparison sketch follows this list)
Prototype demonstrated feasibility of accessibility-oriented HCI solutions
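To make the second insight concrete, the sketch below contrasts the two segmentation strategies. The HSV bounds are illustrative placeholders rather than values from the project, since skin-color thresholds depend heavily on lighting and skin tone, which is exactly why the contour approach held up better.

```python
import cv2
import numpy as np

def skin_mask_hsv(frame):
    """Color thresholding: sensitive to lighting and skin-tone range.
    The HSV bounds below are illustrative, not the project's values."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

def hand_mask_contour(frame):
    """Contour-based segmentation: threshold once (Otsu picks the level
    automatically), then keep only the largest connected shape, which
    tolerates lighting shifts better than fixed color bounds."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask, [hand], -1, 255, thickness=cv2.FILLED)
    return mask
```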



