Streamlining Car Listings with Image Recognition

This project started out as a random idea while my group was brainstorming topics for our senior capstone. Originally, we planned to build a site for housing listings, something that could auto-detect what kind of room was shown in a photo. You’d upload a few pics of your apartment or house, and the app would figure out which photo was which and organize them automatically on the listing page. It sounded cool in theory, but the dataset was a mess and the accuracy wasn’t where we needed it to be.

So we pivoted: instead of trying to recognize rooms, we leaned into something more familiar, cars. The general idea stayed the same: take a user-uploaded image and use it to identify something automatically. But instead of “this is a bathroom,” it became “this is a 2016 Honda Accord.” That shift unlocked a far more useful workflow, and the rest of the project fell into place. We ended up building a full-stack web app where users could upload photos of their car, have the make and model predicted instantly, and create a clean marketplace listing with minimal effort.

We built the frontend using React. Users can browse listings, filter by price or body type, and upload their own vehicles. The main interaction, uploading a car image, was tied into a drag-and-drop component that immediately previewed the image and kicked off the recognition process behind the scenes. Once the prediction came back, the input fields would auto-fill with the make and model, and users could tweak anything that didn’t look right before submitting. We kept the design clean and simple, taking a page from Craigslist’s stripped-down visual style.
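The auto-fill step boiled down to merging the prediction into the form without clobbering anything the user had already typed. Here's a minimal sketch of that logic; the field names and the shape of the prediction object (`{ make, model }`) are assumptions for illustration, not the actual API:

```javascript
// Merge a recognition result into the listing form state.
// Fields the user has already edited win over the prediction,
// so manual corrections survive a late-arriving result.
function applyPrediction(formState, prediction, editedFields = new Set()) {
  const next = { ...formState };
  for (const field of ["make", "model"]) {
    if (prediction[field] && !editedFields.has(field)) {
      next[field] = prediction[field];
    }
  }
  return next;
}
```

In a React component this would feed straight into a `setState` call when the prediction promise resolves, with `editedFields` tracked via the inputs' change handlers.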

The backend was built with Node.js and SQLite, mainly to keep things lightweight and easy to manage. We didn’t need a massive cloud setup, so SQLite was sufficient for storing listings, running queries for search filters, and connecting to the model. The image classification used a TensorFlow.js model trained on a large dataset of common vehicles. We didn’t try to go as deep as trim levels or engine types; make and model alone already gave us solid utility. Once an image was uploaded, it was run through the model, which returned a prediction with confidence scores. If the top score cleared a certain threshold, the form would be pre-filled; if not, we asked the user to enter the details manually.
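The confidence gate described above can be sketched as a small pure function. This assumes the model outputs one score per class label and that labels look like "Honda Accord" (make first, then model); the 0.7 cutoff is an illustrative value, not the one we actually tuned:

```javascript
// Hypothetical cutoff; in practice this would be tuned on validation data.
const CONFIDENCE_THRESHOLD = 0.7;

// Pick the highest-scoring class. If it doesn't clear the threshold,
// return null so the UI falls back to manual entry.
function topPrediction(scores, labels, threshold = CONFIDENCE_THRESHOLD) {
  let bestIdx = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[bestIdx]) bestIdx = i;
  }
  if (scores[bestIdx] < threshold) return null;
  // Assumed label format: "<make> <model...>", e.g. "Honda Accord".
  const [make, ...modelParts] = labels[bestIdx].split(" ");
  return { make, model: modelParts.join(" "), confidence: scores[bestIdx] };
}
```

With TensorFlow.js, `scores` would come from calling the loaded model's `predict` on the preprocessed image tensor and reading the resulting probabilities back into a plain array.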

The end result wasn’t the most polished product; we ran into time constraints and had trouble training the model. However, it was a good proof of concept and taught me a lot about working with image data, bridging machine learning into a live web workflow, and thinking through the user flow of something that relies on automation but still needs human correction. It also pushed me to learn how to keep everything modular and testable, especially when working in a group where each person touched different layers of the stack. I wouldn’t ship it today, but it’s one of those projects that made me realize how small tools, even experimental ones, can solve real friction points.


Ivan Aleksandrov

I write about mobile tech, WordPress platforms, and digital UX. I’m currently working with AppPresser to help organizations improve how they deliver content through apps and websites.
