Portfolio

Showcasing Trip In Tech’s expertise and successful projects.

In today’s dynamic digital landscape, businesses need robust, scalable cloud infrastructure to thrive. Trip In Tech partners with businesses to navigate the complexities of the cloud, specializing in designing, building, and managing high-performance solutions tailored to each client’s needs. Our portfolio shows how we use tools like Terraform for infrastructure automation and Kubernetes for container orchestration to keep applications running smoothly and efficiently, and how we use Go, a language known for its efficiency and concurrency support, to build robust and scalable backend systems that power core business functions. Whether it’s a startup seeking a solid foundation or an established enterprise optimizing its infrastructure, Trip In Tech empowers clients to harness the full potential of the cloud.

1 - Canary Technologies

Lead Engineer – Platform, Full-time

Canary Technologies is an industry leader in hospitality management systems.

Strengthening Security and Access Control at Canary Technologies

Canary Technologies prioritized strengthening the security of its AWS environment and controlling access to critical resources. Trip In Tech co-founder Wael was engaged to implement a robust access control system and ensure comprehensive auditability. His key contributions included:

Implementing Secure Authentication with Google SSO

To eliminate the security risks associated with managing individual AWS credentials, Wael:

  • Integrated Google SSO for AWS Access: Configured single sign-on (SSO) using Google Workspace, allowing employees to access AWS resources securely with their existing Google accounts.

  • Enforced Read-Only Access for Most Users: Implemented policies to grant read-only access to the majority of users, preventing unintended modifications to critical infrastructure and data.

  • Established Granular Access Control for Specific Roles: Defined specific roles with appropriate permissions for administrators and on-call engineers, ensuring they had the necessary access to perform their duties.

This transition to Google SSO significantly improved Canary Technologies' security posture by centralizing authentication and enforcing least privilege access.
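
The engagement’s actual configuration isn’t shown in this write-up; as a minimal Terraform sketch of one common way to implement this pattern, the snippet below federates Google Workspace via SAML into a read-only IAM role. The provider name, role name, and metadata file path are hypothetical:

```hcl
# Sketch only: SAML federation from Google Workspace into a read-only role.
resource "aws_iam_saml_provider" "google" {
  name                   = "google-workspace"          # assumed name
  saml_metadata_document = file("google-metadata.xml") # exported from Google Admin
}

resource "aws_iam_role" "readonly" {
  name = "sso-readonly" # assumed name

  # Trust policy: only federated Google Workspace users may assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_saml_provider.google.arn }
      Action    = "sts:AssumeRoleWithSAML"
      Condition = {
        StringEquals = { "SAML:aud" = "https://signin.aws.amazon.com/saml" }
      }
    }]
  })
}

# The AWS-managed ReadOnlyAccess policy enforces least privilege for most users.
resource "aws_iam_role_policy_attachment" "readonly" {
  role       = aws_iam_role.readonly.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```

Administrators and on-call engineers would get separate roles with broader policies attached, following the same federation pattern.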

Securing and Auditing Django Shell Access

To further enhance security and control access to sensitive data within their ECS-based applications, Wael:

  • Implemented Role-Based Access Control for Django Shell: Configured role-based access control for the Django shell, restricting access based on user roles.

  • Enforced Read-Only Access for Standard Users: Granted standard users read-only access to the Django shell, preventing accidental or unauthorized modifications to production data.

  • Granted Read-Write Access to Authorized Personnel: Provided administrators and on-call engineers with read-write access to the Django shell when necessary, enabling them to perform critical tasks and troubleshoot issues.

  • Enabled Comprehensive Audit Logging for Django Shell: Implemented audit logging to track all Django shell activity, including user logins, commands executed, and data accessed. This ensured accountability and provided valuable insights for security monitoring and analysis.

This granular access control mechanism, coupled with comprehensive audit logging, ensured that sensitive data was protected while still allowing authorized personnel to perform their required tasks.
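
The write-up doesn’t say how shell sessions reach the ECS tasks; assuming they go through ECS Exec (a common setup), one building block for the role split above is an IAM policy that grants ecs:ExecuteCommand only on the production cluster. The Sid, region, account ID, and cluster name here are hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OnCallShellAccess",
      "Effect": "Allow",
      "Action": "ecs:ExecuteCommand",
      "Resource": "arn:aws:ecs:us-east-1:123456789012:cluster/prod"
    }
  ]
}
```

Attaching this statement only to administrator and on-call roles gates who can open a shell at all, and ECS Exec sessions can be shipped to CloudWatch Logs or S3 via the cluster’s executeCommandConfiguration, which is one way to obtain the command-level audit trail described above.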

2 - ArkLight

Site Reliability Engineer, Contract

ArkLight Security’s software provides active cybersecurity protection against the most common threats, along with comprehensive cyber insurance coverage.

Building Scalable Infrastructure on Azure and Providing Go Expertise at ArkLight

During Trip In Tech’s engagement with ArkLight as a contractor, we contributed to two key areas:

Developing Azure Infrastructure with Terraform

We were responsible for building and managing scalable infrastructure on Microsoft Azure using Terraform. This involved:

  • Designing and Implementing Infrastructure as Code: We leveraged Terraform to define and provision various Azure resources, including virtual machines, networking components, and storage solutions.

  • Automating Infrastructure Management: We implemented automation to streamline infrastructure provisioning, updates, and management, ensuring consistency and efficiency.

  • Ensuring Infrastructure Security and Compliance: We incorporated security best practices and compliance requirements into the infrastructure design and implementation.

This work enabled ArkLight to efficiently deploy and manage their applications on Azure, ensuring scalability, reliability, and security.
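
As a sketch of what such Terraform code looks like, the fragment below provisions a resource group, virtual network, and storage account on Azure. The names, region, and sizing are illustrative, not ArkLight’s actual configuration:

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "main" {
  name     = "arklight-rg" # hypothetical name
  location = "westeurope"
}

resource "azurerm_virtual_network" "main" {
  name                = "arklight-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

resource "azurerm_storage_account" "main" {
  name                     = "arklightstore" # must be globally unique
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

Keeping these definitions in version control is what makes the provisioning repeatable and reviewable, as described above.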

Accelerating Development with Go Code Reviews

As Go experts, we played a crucial role in accelerating ArkLight’s development process by providing timely and thorough code reviews. This involved:

  • Reviewing Go Code for Quality and Best Practices: We reviewed pull requests to ensure code quality, adherence to best practices, and efficient implementation.

  • Providing Constructive Feedback and Guidance: We offered constructive feedback to developers, helping them improve their code and learn new techniques.

  • Facilitating Faster Code Merging: Our timely reviews helped expedite the code review process, enabling faster integration of new features and bug fixes.

Our contributions in this area helped ArkLight maintain a high standard of code quality and accelerate their development velocity.

3 - Motive

Lead Engineer – Platform, Full-time

Motive, formerly KeepTruckin, is an industry leader in empowering the people who run the physical economy.

Wael, co-founder of Trip In Tech, led key engineering efforts at Motive for five years (2018-2023). His focus as a Lead Platform Engineer included security, infrastructure, developer productivity, and embedded systems. See some of his notable projects below.

Building a High-Performance Development Ecosystem

As a co-founder of Trip In Tech, I bring a passion for optimizing developer workflows that stems from my experience leading Motive’s developer productivity team, where I was responsible for enhancing the development process across diverse platforms: backend, frontend, mobile, and embedded systems.

To achieve this, I spearheaded two major initiatives:

  • Migrating to a Go-based Microservices Architecture: Recognizing the limitations of the existing Ruby-based system, I led the transition to Go. This shift resulted in significant performance improvements, increased scalability, and enhanced maintainability of the backend services.

  • Creating a Nix-Powered Mono-repository: I implemented a mono-repository that uses Nix for reproducible dependency management and Bazel for streamlined build and deployment automation. This approach further boosted developer efficiency and collaboration across all platforms.

This combination of strategic technology choices and a focus on developer experience significantly improved productivity and paved the way for faster innovation at Motive.
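
Much of Go’s performance case here rests on how cheap concurrency is. As a self-contained sketch (not Motive code), this is the fan-out/fan-in shape that goroutines and a WaitGroup make trivial in a service handler:

```go
package main

import (
	"fmt"
	"sync"
)

// square is a stand-in for per-request work a service would do.
func square(n int) int { return n * n }

func main() {
	jobs := []int{1, 2, 3, 4}
	results := make([]int, len(jobs))

	var wg sync.WaitGroup
	for i, n := range jobs {
		wg.Add(1)
		go func(i, n int) { // one lightweight goroutine per job
			defer wg.Done()
			results[i] = square(n) // each goroutine writes its own slot
		}(i, n)
	}
	wg.Wait()

	fmt.Println(results) // [1 4 9 16]
}
```

Each goroutine writes a distinct index, so no mutex is needed; the WaitGroup provides the fan-in barrier.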

Modernizing Deployments: From Legacy Systems to Kubernetes

At Motive, I spearheaded the critical initiative of migrating our applications from a legacy deployment system based on redundant Git repositories and manual deployments to EC2 instances to a modern, containerized infrastructure orchestrated by Kubernetes. This transition was essential to improve scalability, reliability, and developer agility.

This complex project involved:

  • Containerizing Applications: We packaged our applications and their dependencies into Docker containers, ensuring consistency across different environments and simplifying deployment.

  • Building a Kubernetes Cluster: We designed and deployed a robust Kubernetes cluster on AWS, providing a scalable and self-healing platform for our containerized applications.

  • Automating Deployments: We implemented CI/CD pipelines to automate the build, testing, and deployment processes, significantly reducing manual effort and increasing deployment frequency.
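
The three steps above come together in a Deployment manifest like the following minimal sketch; the service name, image registry, and resource figures are hypothetical:

```yaml
# Sketch: a Deployment replacing a manually managed EC2 service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api            # assumed service name
spec:
  replicas: 3          # Kubernetes reschedules pods if a node fails
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0.0 # built and pushed by CI
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
```

The CI/CD pipeline’s job reduces to building the image, pushing it, and updating the image tag, with Kubernetes handling rollout and self-healing.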

This migration to Kubernetes resulted in:

  • Increased Scalability and Reliability: Kubernetes enabled us to dynamically scale our applications based on demand and ensured high availability through self-healing capabilities.

  • Improved Developer Efficiency: Automated deployments and a streamlined infrastructure allowed developers to focus on building and shipping features faster.

  • Reduced Operational Overhead: Kubernetes simplified infrastructure management, reducing the time and resources required for maintenance and troubleshooting.

This successful modernization project demonstrates my ability to lead complex infrastructure transformations, delivering significant improvements in efficiency, scalability, and reliability.

Building a Robust Data Pipeline with Kafka and Debezium

At Motive, I recognized the need for a more efficient and scalable solution for data processing and integration between our various services. To address this, I led the adoption of Apache Kafka as our central data pipeline. This initiative involved:

  • Evaluating and Selecting a Kafka Vendor: To ensure optimal performance and reliability, we conducted a thorough evaluation of different Kafka vendors, considering factors such as scalability, security, and cost-effectiveness. Based on our assessment, we selected a vendor that best met our requirements.

  • Integrating Kafka with Existing Services: We seamlessly integrated the chosen Kafka service with our existing applications, enabling them to publish and consume data efficiently.

  • Leveraging Kafka Connect and Debezium: To further enhance our data infrastructure, I introduced Kafka Connect and Debezium. This allowed us to capture changes from our PostgreSQL databases in real-time and replicate them to Snowflake, our data warehouse, for analytics and reporting.
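
The write-up names the tools but not the wiring; a minimal Debezium PostgreSQL source connector registration looks roughly like the following, with the hostname, credentials, and table list purely illustrative (the Snowflake side would be a separate sink connector):

```json
{
  "name": "postgres-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "db.internal",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "********",
    "database.dbname": "app",
    "topic.prefix": "app",
    "plugin.name": "pgoutput",
    "table.include.list": "public.orders"
  }
}
```

Posted to the Kafka Connect REST API, this has Debezium stream every change to the listed tables into Kafka topics in real time via PostgreSQL logical replication.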

This Kafka-based data pipeline delivered significant benefits:

  • Improved Data Processing Efficiency: Kafka’s high-throughput architecture enabled us to process large volumes of data with low latency.

  • Enhanced Scalability and Reliability: Leveraging a managed Kafka service provided enhanced scalability and fault tolerance, guaranteeing reliable data delivery.

  • Real-time Data Integration: Debezium’s change data capture capabilities enabled real-time data replication to Snowflake, empowering data-driven decision-making.

This project showcases my ability to design and implement robust data pipelines, leveraging cutting-edge technologies like Kafka and Debezium to improve data processing, integration, and analysis.

4 - Linnaeus

Site Reliability Engineer, Contract

Linnaeus is an open-source cybersecurity threat intelligence platform.

Enhancing Infrastructure Management and Security at Linnaeus

Linnaeus relied on AWS to power its infrastructure, and Trip In Tech was brought in to address key challenges related to infrastructure management, security, and access control. Our primary focus was on:

Implementing Infrastructure as Code with Terraform

Linnaeus needed to gain better control and visibility over its AWS infrastructure. To achieve this, we:

  • Introduced Terraform for Infrastructure Management: Implemented Terraform to define and manage all infrastructure components as code, enabling version control, collaboration, and automation.

  • Migrated Existing Infrastructure to Terraform: Meticulously migrated existing AWS resources to Terraform, ensuring a smooth transition and minimizing disruption.

  • Established Best Practices for Terraform Usage: Established clear guidelines and best practices for using Terraform, promoting consistency and maintainability across the infrastructure.

This transition to Infrastructure as Code provided Linnaeus with a more robust, manageable, and auditable infrastructure foundation.
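
Bringing live resources under Terraform’s control typically relies on terraform import; on Terraform 1.5 and later the adoption can be declared in code, roughly like this (the bucket name is hypothetical):

```hcl
# Terraform 1.5+ import block: adopt an existing bucket without recreating it.
import {
  to = aws_s3_bucket.logs
  id = "linnaeus-logs" # name of the existing bucket
}

resource "aws_s3_bucket" "logs" {
  bucket = "linnaeus-logs"
}
```

Running terraform plan then shows the import alongside any drift between the written configuration and the real resource, which is what makes the migration safe to review.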

Enhancing Security and Access Control

To improve security and streamline access management, we:

  • Separated Staging and Production Environments: Created separate AWS accounts for staging and production environments, enhancing security and preventing accidental modifications to production resources.

  • Implemented Role-Based Access Control (RBAC): Designed and implemented a granular RBAC system, limiting engineers’ access to only the resources they needed for their specific roles.

  • Enabled Detailed Audit Logging: Configured comprehensive audit logging to track all infrastructure changes and access, ensuring accountability and facilitating security analysis.

These measures significantly strengthened Linnaeus’s security posture and provided a more controlled and auditable environment for managing its AWS infrastructure.

5 - Publica

Co-founder, Full-time

Publica - The Connected TV Advertising Platform.

Building the Foundation for Publica

Trip In Tech co-founder Wael played a pivotal role in building Publica’s technology foundation from the ground up. As a co-founder of Publica, Wael was responsible for creating both the infrastructure and the backend systems that power Publica’s operations.

Designing and Implementing a Robust Infrastructure

To support Publica’s growth and scalability, Wael designed and implemented a comprehensive infrastructure on AWS, leveraging:

  • Kubernetes for Container Orchestration: Kubernetes was utilized to manage and orchestrate containerized applications, ensuring high availability, efficient resource utilization, and seamless scaling.

  • Terraform for Infrastructure as Code: Terraform was adopted to define and manage the infrastructure as code, enabling automation, version control, and collaboration.

  • Datadog for Monitoring and Observability: Datadog was integrated to provide comprehensive monitoring and observability across the infrastructure and applications, enabling proactive issue detection and resolution.

This robust and scalable infrastructure provided a solid foundation for Publica’s applications and services.

Developing a High-Performance Backend with Go and Kafka

To power Publica’s core functionality, Wael developed a high-performance backend system using:

  • Go for Efficient and Scalable Services: Go was chosen for its efficiency, concurrency, and suitability for building microservices, ensuring a robust and scalable backend.

  • Kafka for Real-time Data Streaming: Kafka was integrated to handle real-time data streaming between different services, enabling efficient data processing and communication.

This backend system provided the necessary performance and reliability to support Publica’s growing user base and evolving needs.

Wael’s contributions as a co-founder and lead infrastructure and backend engineer were essential in establishing Publica’s technology foundation and enabling its early success.