Motive

Lead Engineer – Platform, Full-time

Motive, formerly KeepTruckin, is an industry leader in empowering the people who run the physical economy.

Wael, co-founder of Trip In Tech, led key engineering efforts at Motive for five years (2018-2023). His focus as a Lead Platform Engineer included security, infrastructure, developer productivity, and embedded systems. See some of his notable projects below.

Building a High-Performance Development Ecosystem

My passion for optimizing developer workflows, which I carry into my work as a co-founder of Trip In Tech, stems from my experience leading Motive’s developer productivity team. There, I was responsible for improving the development process across diverse platforms, including backend, frontend, mobile, and embedded systems.

To achieve this, I spearheaded two major initiatives:

  • Migrating to a Go-based Microservices Architecture: Recognizing the limitations of the existing Ruby-based system, I led the transition to Go. This shift brought significant performance gains, greater scalability, and more maintainable backend services; a sketch of the kind of service involved appears after this list.

  • Creating a Nix-Powered Monorepo: I implemented a monorepo that uses Nix for efficient dependency management and Bazel for streamlined build and deployment automation. This approach further boosted developer efficiency and collaboration across all platforms.
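
Below is a minimal sketch of the kind of Go service that replaced a Ruby endpoint during this migration. It is illustrative only: it uses the standard net/http package, and the route and payload are hypothetical, not Motive’s actual API.

```go
// Minimal illustrative Go microservice; the route and payload are
// hypothetical stand-ins, not Motive's actual API.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type vehicleStatus struct {
	VehicleID string `json:"vehicle_id"`
	Status    string `json:"status"`
}

// statusHandler returns a stubbed status; a real service would query a datastore.
func statusHandler(w http.ResponseWriter, r *http.Request) {
	resp := vehicleStatus{
		VehicleID: r.URL.Query().Get("vehicle_id"),
		Status:    "active",
	}
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(resp); err != nil {
		log.Printf("encode response: %v", err)
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/vehicle-status", statusHandler)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```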

This combination of strategic technology choices and a focus on developer experience significantly improved productivity and paved the way for faster innovation at Motive.

Modernizing Deployments: From Legacy Systems to Kubernetes

At Motive, I spearheaded the migration of our applications from a legacy deployment system, built on redundant Git repositories and manual deployments to EC2 instances, to a modern, containerized infrastructure orchestrated by Kubernetes. This transition was essential to improve scalability, reliability, and developer agility.

This complex project involved:

  • Containerizing Applications: We packaged our applications and their dependencies into Docker containers, ensuring consistency across different environments and simplifying deployment.

  • Building a Kubernetes Cluster: We designed and deployed a robust Kubernetes cluster on AWS, providing a scalable and self-healing platform for our containerized applications (a service-side sketch of the probe and graceful-shutdown pattern appears after this list).

  • Automating Deployments: We implemented CI/CD pipelines to automate the build, testing, and deployment processes, significantly reducing manual effort and increasing deployment frequency.
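
One service-level concern in this kind of migration is cooperating with the orchestrator. The sketch below is hedged and not Motive’s production code: a Go service that exposes a health endpoint for Kubernetes liveness/readiness probes and shuts down gracefully on SIGTERM, so rolling updates don’t drop in-flight requests.

```go
// Container-friendly Go service sketch for Kubernetes: a /healthz endpoint
// for probes and graceful shutdown on SIGTERM. Illustrative only.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// Probed by Kubernetes liveness/readiness checks.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Serve in the background; the main goroutine waits for a termination signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Give in-flight requests time to finish before the pod is terminated.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown: %v", err)
	}
}
```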

This migration to Kubernetes resulted in:

  • Increased Scalability and Reliability: Kubernetes enabled us to dynamically scale our applications based on demand and ensured high availability through self-healing capabilities.

  • Improved Developer Efficiency: Automated deployments and a streamlined infrastructure allowed developers to focus on building and shipping features faster.

  • Reduced Operational Overhead: Kubernetes simplified infrastructure management, reducing the time and resources required for maintenance and troubleshooting.

This successful modernization project demonstrates my ability to lead complex infrastructure transformations, delivering significant improvements in efficiency, scalability, and reliability.

Building a Robust Data Pipeline with Kafka and Debezium

At Motive, I recognized the need for an efficient, scalable way to process and integrate data across our services. To address this, I led the adoption of Apache Kafka as our central data pipeline. This initiative involved:

  • Evaluating and Selecting a Kafka Vendor: To ensure performance and reliability, we evaluated managed Kafka vendors on scalability, security, and cost-effectiveness, and selected the one that best met our requirements.

  • Integrating Kafka with Existing Services: We integrated the chosen Kafka service with our existing applications so that they could publish and consume data efficiently (a producer/consumer sketch follows this list).

  • Leveraging Kafka Connect and Debezium: To further enhance our data infrastructure, I introduced Kafka Connect and Debezium. This allowed us to capture changes from our PostgreSQL databases in real time and replicate them to Snowflake, our data warehouse, for analytics and reporting.
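
The write-up above doesn’t name a specific client library, so the sketch below assumes the github.com/segmentio/kafka-go client; the broker addresses, topic, group ID, and payload are illustrative placeholders.

```go
// Sketch of publishing and consuming events over the Kafka pipeline,
// assuming the segmentio/kafka-go client. Brokers, topic, group ID,
// and payload are placeholders.
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	ctx := context.Background()

	// Producer side: a service publishes an event to the shared pipeline.
	writer := &kafka.Writer{
		Addr:     kafka.TCP("broker-1:9092", "broker-2:9092"),
		Topic:    "vehicle-events",
		Balancer: &kafka.LeastBytes{},
	}
	defer writer.Close()

	err := writer.WriteMessages(ctx, kafka.Message{
		Key:   []byte("vehicle-42"),
		Value: []byte(`{"event":"location_update","lat":37.77,"lon":-122.41}`),
	})
	if err != nil {
		log.Fatalf("publish: %v", err)
	}

	// Consumer side: another service reads the same topic as part of a
	// consumer group, so partitions are balanced across its replicas.
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"broker-1:9092", "broker-2:9092"},
		GroupID: "telemetry-consumers",
		Topic:   "vehicle-events",
	})
	defer reader.Close()

	msg, err := reader.ReadMessage(ctx)
	if err != nil {
		log.Fatalf("consume: %v", err)
	}
	log.Printf("consumed key=%s value=%s", msg.Key, msg.Value)
}
```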

This Kafka-based data pipeline delivered significant benefits:

  • Improved Data Processing Efficiency: Kafka’s high-throughput architecture enabled us to process large volumes of data with low latency.

  • Enhanced Scalability and Reliability: Leveraging a managed Kafka service provided enhanced scalability and fault tolerance, ensuring reliable data delivery.

  • Real-time Data Integration: Debezium’s change data capture capabilities enabled real-time data replication to Snowflake, empowering data-driven decision-making (see the connector registration sketch after this list).
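
For the change data capture piece, a Debezium PostgreSQL source connector is registered with Kafka Connect through its REST API. The sketch below is illustrative only: the host names, credentials, table list, and connector name are placeholders, and a separate sink connector (not shown) would load the resulting topics into Snowflake.

```go
// Sketch of registering a Debezium PostgreSQL source connector with the
// Kafka Connect REST API. All names and credentials are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	connector := map[string]interface{}{
		"name": "orders-postgres-cdc",
		"config": map[string]string{
			"connector.class":    "io.debezium.connector.postgresql.PostgresConnector",
			"plugin.name":        "pgoutput",
			"database.hostname":  "postgres.internal",
			"database.port":      "5432",
			"database.user":      "cdc_user",
			"database.password":  "example-secret",
			"database.dbname":    "orders",
			"topic.prefix":       "cdc", // prefix for the emitted change-event topics
			"table.include.list": "public.orders,public.shipments",
		},
	}

	body, err := json.Marshal(connector)
	if err != nil {
		log.Fatalf("marshal connector config: %v", err)
	}

	// Kafka Connect exposes a REST API (port 8083 by default) for managing connectors.
	resp, err := http.Post("http://kafka-connect:8083/connectors", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("register connector: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("connector registration returned %s", resp.Status)
}
```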

This project showcases my ability to design and implement robust data pipelines, using Kafka and Debezium to improve data processing, integration, and analysis.