AiVis: AI-Powered Industrial Quality Inspection System
As an engineer, I had the opportunity to work on AiVis, an AI-powered industrial quality inspection system developed by AiFactory. AiVis is an integrated hardware and software solution that automates component handling, inference, and sorting in industrial environments.
One of the core features of AiVis is its ability to accurately separate good components from defective ones. Leveraging advanced AI algorithms, AiVis is trained to detect a wide range of component defects, including cracks, dents, holes, shear marks, and tears. This enables manufacturers to quickly and reliably identify and discard defective components, ensuring that only high-quality products make it to market.
In addition to defect detection, AiVis also has the capability to detect the presence or absence of components in an assembly, adding an extra layer of quality control to the manufacturing process.
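The accept/reject decision described above can be sketched as a small routing function. This is an illustrative sketch only: the label names, confidence threshold, and detection format are assumptions, not the production AiVis implementation.

```python
# Hypothetical defect labels and output format for an upstream vision model.
DEFECT_LABELS = {"crack", "dent", "hole", "shear", "tear"}

def sort_component(detections, presence_expected=True):
    """Route a component to 'accept' or 'reject'.

    detections: list of (label, confidence) pairs from an upstream
    detector (format assumed for illustration).
    presence_expected: whether the component must be present in the assembly.
    """
    # Keep only labels the model is reasonably confident about.
    labels = {label for label, conf in detections if conf >= 0.5}
    if presence_expected and "component_missing" in labels:
        return "reject"  # presence/absence check
    if labels & DEFECT_LABELS:
        return "reject"  # any confident defect fails the part
    return "accept"
```

In practice the threshold and label set would be tuned per client and per component type; the sketch only shows how defect detection and the presence check feed one sorting decision.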
Throughout the project, I was responsible for designing and implementing critical components of the system, including the integration of hardware and software, the customization of AI algorithms, and the optimization of the system for real-time performance. I worked closely with my team at AiFactory to ensure that AiVis met the specific requirements of our clients and delivered reliable, accurate results.
Working on the AiVis project has been a fulfilling experience, allowing me to apply my engineering skills to create an innovative solution that has the potential to revolutionize quality inspection in industrial settings. I am proud to have been part of this project and to have contributed to its success.
VisionSense: Universal AI-Powered Vision Tool
As an engineer, I played a pivotal role in developing the backend for a universal vision tool that lets users seamlessly add cameras and select AI-related functionalities to extract valuable data. A key capability of the tool is identifying people in both photo and video data, making it applicable across many domains. The primary use cases we focused on were counting people in designated zones and controlling access to hazardous areas, where the system proved highly effective.
My contribution to this project encompassed the entire backend architecture, ensuring robustness, reliability, and seamless integration with the frontend. I spearheaded the development of the backend functionalities, including data processing, camera integration, and AI model integration. My efforts were instrumental in creating a powerful vision tool that offers unparalleled capabilities for capturing and processing visual data, enabling businesses to enhance safety measures, improve decision-making, and optimize operations.
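A core piece of the zone-counting use case is deciding whether a detected person falls inside a user-drawn zone. A minimal sketch of that step, assuming person centroids come from an upstream detector (function names are illustrative):

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside polygon (a list of vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the horizontal ray at height y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # each crossing toggles inside/outside
    return inside

def count_people_in_zone(person_centroids, zone):
    """Count detected person centroids inside a hazard/counting zone."""
    return sum(point_in_polygon(p, zone) for p in person_centroids)
```

The same check drives access control: a non-zero count inside a hazardous-area polygon can trigger an alert.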
StreamQuicker: Web-Based Real-Time Universal Streaming Platform
StreamQuicker is a cutting-edge web-based streaming platform that I led the development of, utilizing WebRTC technology to capture real-time camera and microphone input. As the backend architect and lead developer, I implemented advanced transcoding functionality using industry-standard codecs such as H.264 and AAC, and integrated streaming protocols like RTMP and HLS to seamlessly deliver media content to social media platforms like YouTube and Facebook. I also implemented adaptive streaming, user authentication and authorization using OAuth and JWT, and optimized the system for performance and scalability with caching, load balancing, and distributed processing.
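The transcoding-and-delivery step can be illustrated by assembling an ffmpeg command that encodes to H.264/AAC and pushes to an RTMP ingest endpoint. This is a hedged sketch, not the actual StreamQuicker pipeline: the function name, preset choice, and example ingest URL are assumptions for illustration.

```python
def build_rtmp_command(input_url, stream_key,
                       ingest="rtmp://a.rtmp.youtube.com/live2"):
    """Return an ffmpeg argv list that transcodes a source to H.264/AAC
    and pushes it to an RTMP ingest endpoint (endpoint is illustrative)."""
    return [
        "ffmpeg",
        "-i", input_url,        # source, e.g. a WebRTC-relayed stream
        "-c:v", "libx264",      # H.264 video
        "-preset", "veryfast",  # latency-friendly encoding preset
        "-c:a", "aac",          # AAC audio
        "-b:a", "128k",
        "-f", "flv",            # RTMP carries FLV-muxed streams
        f"{ingest}/{stream_key}",
    ]
```

Streaming to multiple platforms then reduces to running one such command per destination (or using a single ffmpeg process with multiple outputs).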
This project required in-depth expertise in multimedia technologies, backend development, streaming protocols, transcoding, and security practices. StreamQuicker empowers users to effortlessly stream their content to multiple social media platforms, providing a seamless and secure streaming experience for end-users.
SafePlant: ML-Based Factory Worker Safety System
I led the development of a plant safety system that utilized machine learning and computer vision technologies to analyze plant images and videos for potential safety hazards. The system was built using TensorFlow for machine learning model training and Flask for the backend web service. Websockets were implemented for real-time communication between the front-end and back-end components of the system.
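When the model flags a hazard, the backend pushes an event to the front-end over the WebSocket channel. A minimal sketch of such an alert payload, assuming a JSON message format (all field names here are illustrative, not the actual SafePlant schema):

```python
import json
import time

def hazard_alert(camera_id, hazard_type, confidence):
    """Serialize a hazard event for the front-end WebSocket channel."""
    return json.dumps({
        "camera_id": camera_id,
        "hazard": hazard_type,            # e.g. "missing_helmet" (example label)
        "confidence": round(confidence, 3),
        "timestamp": time.time(),         # epoch seconds when flagged
    })
```

Keeping the payload as plain JSON makes it easy for the browser client to parse and render alerts in real time.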
Technologies I Know:
- Configuration Management:
- Containerization and Orchestration:
- Continuous Integration/Continuous Delivery (CI/CD):
  - GitLab CI/CD
  - Travis CI
- Infrastructure as Code (IaC):
- Cloud Platforms:
  - Amazon Web Services (AWS)
  - Microsoft Azure
  - Google Cloud Platform (GCP)
- Version Control Systems:
- Monitoring and Logging:
  - ELK Stack (Elasticsearch, Logstash, Kibana)