Ishan Madhusanka
Results-driven Senior Full-Stack Developer with 8+ years' experience building scalable, user-friendly web applications, with a proven ability to deliver complex integrations and drive innovation across the full development lifecycle.
Blog
Building Near Real-time Multiplayer Cursors: Addressing Latency and Smoothness
January, 2025
Introduction
Creating real-time multiplayer experiences presents unique challenges, especially when precise synchronization and smooth animations are crucial. In this article, I'll delve into the development of a system for displaying live cursors (mouse pointers) of multiple players on a shared board. Having previously experimented with similar concepts in less demanding environments, I faced the task of building a robust and accurate solution for a real-world application. This project, while concise, offered valuable insights into tackling latency, event management, and animation smoothing.
The Challenge: Real-time Cursor Synchronization
My previous attempts at implementing real-time cursor sharing were primarily proof-of-concept projects with relaxed requirements. This new project demanded a production-ready solution, requiring careful consideration of network latency, event handling, and rendering. The core challenge was ensuring a smooth and accurate representation of each player's cursor movements across all clients, even under varying network conditions.
Addressing Latency
Network latency is an inherent issue in real-time applications. If events (cursor position updates) are sent infrequently, the resulting cursor movement appears jerky and discontinuous, especially with latencies exceeding 50ms. Simply updating the cursor position based on each received event is insufficient. In prior projects, I used basic tweening (linear interpolation) between event positions to smooth the motion. However, this approach has a critical flaw: if intermediate events are lost due to network issues, the cursor's path becomes inaccurate.
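For reference, basic tweening of this kind amounts to just a few lines; here is a minimal sketch in TypeScript (names are illustrative):

```typescript
type Point = { x: number; y: number };

// Naive tweening: linearly interpolate between the last two known
// positions, with t as normalized progress (0..1) since the last event.
function lerp(from: Point, to: Point, t: number): Point {
  return {
    x: from.x + (to.x - from.x) * t,
    y: from.y + (to.y - from.y) * t,
  };
}
// Flaw: if an intermediate event is lost, the cursor tweens straight
// to the next known point and cuts the corner of the real path.
```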
To address this, I implemented a more robust event management system. This involved:
- Event Matching and Prioritization: Ensuring that the most up-to-date events are sent to and processed by the server and subsequently broadcast to other clients. This minimizes the impact of out-of-order or duplicate events.
- Batching Events: Grouping multiple cursor position updates into batches before sending them to the server. This reduces network overhead and improves efficiency, especially under high event frequency.
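A minimal sketch of the batching side, assuming a fixed flush interval and a generic send callback (both illustrative rather than the exact production setup):

```typescript
type CursorEvent = { x: number; y: number; ts: number };

// Collect cursor samples locally and flush them as one message,
// trading a few milliseconds of latency for far fewer packets.
class EventBatcher {
  private buffer: CursorEvent[] = [];

  constructor(
    private send: (batch: CursorEvent[]) => void,
    flushIntervalMs = 50, // illustrative cadence; tune per app
  ) {
    setInterval(() => this.flush(), flushIntervalMs);
  }

  push(x: number, y: number): void {
    this.buffer.push({ x, y, ts: Date.now() });
  }

  private flush(): void {
    if (this.buffer.length === 0) return;
    this.send(this.buffer);
    this.buffer = [];
  }
}

// Usage, assuming some WebSocket-like transport:
// const batcher = new EventBatcher(b => socket.send(JSON.stringify(b)));
// on mousemove: batcher.push(e.clientX, e.clientY);
```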
Replaying Cursor Movements with Interpolation
Batching events introduces a new challenge: how to accurately render the cursor's path on receiving clients. The solution was to implement a replay mechanism that reconstructs the cursor's movement based on the received batch of events. This involved:
- Position History: Maintaining a history of cursor positions for each player.
- Interpolation: Using different interpolation techniques based on the number of points available:
  - Linear interpolation (2 points): for simple movements between two positions.
  - Quadratic interpolation (3 points): for smoother curves through three positions.
  - Centripetal Catmull–Rom spline (more than 3 points): for complex paths requiring smooth, continuous curves. This method is particularly effective at preserving the shape of the original path, even with sparse data points.
This combination of position history and interpolation allowed for accurate and smooth cursor movement, regardless of the event frequency or network latency.
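The Catmull–Rom case is the interesting one. Below is a self-contained sketch of the centripetal variant using the standard Barry–Goldman formulation with alpha = 0.5; the two- and three-point fallbacks are straightforward by comparison.

```typescript
type Point = { x: number; y: number };

// Centripetal Catmull-Rom (alpha = 0.5) via the Barry-Goldman
// pyramid. Evaluates the segment between p1 and p2; u in [0, 1]
// is the normalized progress along that segment.
function catmullRom(p0: Point, p1: Point, p2: Point, p3: Point, u: number): Point {
  const alpha = 0.5;
  const next = (t: number, a: Point, b: Point) =>
    t + Math.hypot(b.x - a.x, b.y - a.y) ** alpha;

  // Non-uniform knots: spacing follows the square root of the point
  // distance, which is what keeps the curve free of cusps and loops.
  const t0 = 0;
  const t1 = next(t0, p0, p1);
  const t2 = next(t1, p1, p2);
  const t3 = next(t2, p2, p3);
  const t = t1 + (t2 - t1) * u;

  const mix = (a: Point, b: Point, ta: number, tb: number): Point => {
    const w = tb === ta ? 0 : (t - ta) / (tb - ta); // guard repeated points
    return { x: a.x + (b.x - a.x) * w, y: a.y + (b.y - a.y) * w };
  };

  const a1 = mix(p0, p1, t0, t1);
  const a2 = mix(p1, p2, t1, t2);
  const a3 = mix(p2, p3, t2, t3);
  const b1 = mix(a1, a2, t0, t2);
  const b2 = mix(a2, a3, t1, t3);
  return mix(b1, b2, t1, t2);
}
```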
Handling Out-of-Order Events
Despite the improvements, I still encountered occasional glitches in the animation. After investigation, I discovered that some older events were arriving after newer ones due to network delays. To resolve this, I implemented a simple timestamp check:
- Timestamp Verification: Each event is timestamped on the client before being sent, and the server preserves that timestamp when broadcasting. The server and receiving clients discard any event whose timestamp is older than the latest one received for that player.
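The check itself is tiny; a sketch, assuming millisecond timestamps and string player IDs:

```typescript
// Track the newest timestamp seen per player; anything older is
// an out-of-order or duplicate event and gets discarded.
const latestTs = new Map<string, number>();

function acceptEvent(playerId: string, ts: number): boolean {
  const last = latestTs.get(playerId) ?? 0;
  if (ts <= last) return false; // stale: arrived after a newer event
  latestTs.set(playerId, ts);
  return true;
}
```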
This simple fix eliminated most of the remaining animation artifacts and significantly improved the smoothness of the cursor movements.
Fine-Tuning the Animation
Finally, I added a subtle touch to further enhance the visual smoothness: micro-tweening. Instead of directly updating the cursor position to the target position, I implemented a gradual approach:
- Partial Updates: When the cursor is moving, the position is updated by 50% towards the target in each frame. When the cursor stops moving (no new events received for a short period), the position is updated by 80% towards the target.
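A sketch of the per-frame update, applied inside a requestAnimationFrame loop:

```typescript
type Point = { x: number; y: number };

// Each frame, move the rendered cursor a fixed fraction of the
// remaining distance toward the latest target. The fractions are
// the ones described above: 50% while moving, 80% once idle.
function microTween(current: Point, target: Point, isMoving: boolean): Point {
  const f = isMoving ? 0.5 : 0.8;
  return {
    x: current.x + (target.x - current.x) * f,
    y: current.y + (target.y - current.y) * f,
  };
}
```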
This technique creates a subtle "easing out" effect, reducing the perceived digital nature of the movement and providing a more natural and responsive feel.
Conclusion
Building a real-time multiplayer cursor system requires careful consideration of various factors, particularly network latency and animation smoothness. By implementing event batching, intelligent interpolation, timestamp verification, and micro-tweening, I was able to create a robust and performant solution that provides a seamless and accurate real-time cursor sharing experience. This project highlighted the importance of understanding network behavior and employing appropriate techniques to mitigate its impact on user experience.
How I Optimized a File Generation Service from 5 minutes to 2 seconds
December, 2017
Introduction
Imagine waiting 5 minutes for a simple Excel report to generate. Frustrating, right? This was the reality I faced when working with a legacy system that was tasked with producing these reports. Determined to improve this sluggish process, I embarked on a journey of optimization that ultimately transformed the system's performance from a snail's pace to a cheetah's sprint.
The Problem
The legacy system, built using NodeJS and relying on external libraries to generate Excel files, was struggling to keep up with the demand. Even for relatively small reports, consisting of just 10 columns and 100 rows, the generation time was a staggering 5 minutes. This bottleneck was not only frustrating for users but also consuming excessive system resources.
The Evaluation
Initially, I explored the possibility of optimizing the system by switching to different libraries. Python libraries, known for their efficiency, were among the first options I considered. While they did offer some improvement, the generation time still hovered around 2 minutes, far from satisfactory.
It became evident that the libraries themselves were not the root of the problem. After a thorough investigation, I decided to take a more radical approach: reverse engineering the Excel files to understand their underlying structure. This seemed like a daunting task, but the potential benefits outweighed the risks.
The Breakthrough
By dissecting the Excel files, I gained valuable insights into their composition and formatting. This knowledge allowed me to develop a custom solution that bypassed the limitations of the existing libraries. One of the key breakthroughs was the implementation of streaming, which enabled the system to process and write data to the Excel file in smaller chunks, significantly reducing memory consumption.
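To illustrate the streaming idea (a sketch, not the actual production code): the worksheet part inside an .xlsx file is plain SpreadsheetML, so rows can be written to a file stream chunk by chunk rather than materializing the whole document first. Zipping this part into the final .xlsx container is omitted, and cell values are assumed to be XML-escaped already.

```typescript
import { createWriteStream } from "node:fs";

// Stream the worksheet XML row by row instead of building the
// whole document in memory, keeping memory use flat regardless
// of row count.
async function writeSheet(rows: string[][], path: string): Promise<void> {
  const out = createWriteStream(path);
  out.write('<?xml version="1.0" encoding="UTF-8"?>');
  out.write('<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"><sheetData>');
  for (const [i, row] of rows.entries()) {
    const cells = row
      .map(v => `<c t="inlineStr"><is><t>${v}</t></is></c>`)
      .join("");
    // Respect back-pressure: pause until the stream buffer drains.
    if (!out.write(`<row r="${i + 1}">${cells}</row>`)) {
      await new Promise(resolve => out.once("drain", resolve));
    }
  }
  out.write("</sheetData></worksheet>");
  out.end();
}
```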
The Technology Shift
While the custom solution provided a substantial performance boost, I was still eager to explore other possibilities. I experimented with Golang and Rust, two languages renowned for their speed and efficiency. Golang, with its simplicity and concurrency features, showed promising results, reducing the generation time to around 8 seconds. However, its type system, which I found less intuitive at the time, was a minor drawback.
Rust, on the other hand, offered a compelling combination of performance and type safety. Its strict type checking helped me avoid common pitfalls and write more reliable code. Despite the initial learning curve, I was impressed by the speed and efficiency it brought to the table.
The Results
Ultimately, I decided to leverage the power of shell scripting to create the final solution. By combining bash with NodeJS or Lua for specific tasks, I was able to build a highly efficient system that utilized C binaries for reading and manipulating files. This approach resulted in a dramatic reduction in generation time, bringing it down to a mere 2 seconds.
Conclusion
The journey to optimize the Excel generation process was a valuable learning experience. It taught me the importance of understanding the underlying problem, exploring unconventional solutions, and leveraging the right tools for the job. By combining reverse engineering, streaming, and strategic technology choices, I was able to transform a sluggish system into a high-performance powerhouse.
Using Local Storage and Session Storage to Gracefully Handle Timeouts
September, 2023
Understanding the Problem
When a user is inactive for a prolonged period, it's often necessary to automatically log them out for security reasons. However, this can lead to a frustrating user experience if they're suddenly logged out without a clear explanation, especially when multiple open tabs are involved. To mitigate this, we can leverage the power of localStorage and sessionStorage to display a consistent message across multiple tabs and sessions.
Why localStorage and sessionStorage?
When multiple tabs are open, we need them to stay consistent within the current browser session. localStorage gives us a channel that every tab on the same origin can see, making it the natural trigger for cross-tab coordination, while sessionStorage persists data only for the duration of a single tab's session. Used together, they guarantee that the expired-session message is displayed appropriately, and only within the current session.
localStorage
- Persistent storage that remains across browser sessions.
- Ideal for scenarios where data needs to be shared between multiple tabs in the same browser.
- In our case, we'll use it to trigger the logout message on all open tabs.
sessionStorage
- Temporary storage that's cleared when the browser tab or window is closed.
- Perfect for storing the logout message within a specific session.
- Once the user closes the browser or logs back in, the message is removed.
Implementation Strategy
1. Timeout Detection and localStorage Update
- When the timeout threshold is reached, set a flag in localStorage to indicate the logout.
- This flag acts as a trigger for all open tabs.
2. Session Storage Synchronization
- On page load or refresh, check the localStorage flag.
- If the flag is set, store a value in sessionStorage to display the logout message.
- This ensures that the message is shown only within the current session.
3. Message Display
- Use JavaScript to check the sessionStorage value and display the logout message if it exists.
- The message can be shown as a modal, alert, or a persistent notification.
4. Clearing the Flags
- Once the user logs back in or closes the browser, clear both the localStorage and sessionStorage flags to prevent the message from appearing unnecessarily.
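A minimal sketch of these four steps, with illustrative key names:

```typescript
const LOGOUT_FLAG = "sessionTimedOut";    // illustrative key names
const MESSAGE_FLAG = "showLogoutMessage";

// Step 1: on timeout, set the localStorage flag. The write also fires
// a "storage" event in every other open tab on the same origin.
function onSessionTimeout(): void {
  localStorage.setItem(LOGOUT_FLAG, String(Date.now()));
}

// Steps 2-3: copy the flag into sessionStorage so the message is
// scoped to this tab's session, then display it if present.
function syncAndDisplay(): void {
  if (localStorage.getItem(LOGOUT_FLAG)) {
    sessionStorage.setItem(MESSAGE_FLAG, "1");
  }
  if (sessionStorage.getItem(MESSAGE_FLAG)) {
    // Render however fits the app: modal, alert, or banner.
    console.log("You were logged out after a period of inactivity.");
  }
}

// Step 4: clear both flags once the user logs back in.
function onLogin(): void {
  localStorage.removeItem(LOGOUT_FLAG);
  sessionStorage.removeItem(MESSAGE_FLAG);
}

window.addEventListener("load", syncAndDisplay);
window.addEventListener("storage", e => {
  if (e.key === LOGOUT_FLAG && e.newValue) syncAndDisplay();
});
```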
Key Considerations
- Security: Ensure that sensitive information isn't stored in localStorage.
- User Experience: Design the logout message to be informative and clear.
- Browser Compatibility: Test your implementation across different browsers to ensure consistent behavior.
- Edge Cases: Consider scenarios like multiple devices, browser refreshes, and network interruptions.
By carefully combining localStorage and sessionStorage, we can provide a seamless and user-friendly experience, even when a session times out after a period of inactivity.
Work History
Tech Lead
- Architected a Presentation–Abstraction–Control (PAC) front end, decoupling business logic (implemented as hooks) from UI components, leveraging React and TypeScript to enhance maintainability and scalability.
- Led the front-end development of the unified SPH Ad Portal experience, utilizing React and TypeScript to deliver a modern, responsive user interface and streamline the user experience.
- Introduced an in-house feature-flag framework, enabling controlled rollouts through a multi-layered approach (manual updates for internal users, configuration via environment variables, and system defaults) to facilitate rapid iteration and mitigate risk.
- Reduced deployment time by 70% by implementing optimized CI/CD pipelines, leveraging a Rust-based toolchain to dramatically improve pipeline performance and accelerate release cycles.
- Mentored a team of engineers, establishing coding standards and best practices that improved code quality and reduced technical debt, fostering a culture of open communication, collaboration and continuous improvement.
Engineering Consultant
- Designed and developed a bespoke no-code workflow automation platform for the Operations team, leveraging automation to streamline processes and significantly reduce manual effort.
- Drove a 40-hour monthly reduction in average operational workload per employee through the implementation of automated workflows – directly impacting team efficiency and productivity.
- Championed a shift to 100% unit testing coverage across the Backend codebase, enabling confident refactoring and feature addition while mitigating technical risk.
- Implemented comprehensive testing strategies – including component and end-to-end testing – on the Frontend codebase, resulting in a demonstrable reduction in production bugs and improved application stability.
- Accelerated deployments by 70% through the implementation of AWS CDK-based automated deployments, achieving a consistent deployment time of less than 4 minutes.
Senior Tech Lead
- Spearheaded the successful integration of AUD functionalities onto the Railsbank platform, leading two integration teams to deliver a fully functional solution within four months – demonstrating rapid delivery and strategic execution.
- Led cross-functional, multi-regional teams to deliver three new partner products, ensuring alignment with Railsbank standards and driving innovation through seamless KYC integrations.
- Streamlined release processes by developing and introducing automated data migrations with approval workflows, reducing release approval time from multiple weeks to a single hour – significantly improving operational efficiency.
- Contributed to the development of the integration layer using serverless technologies on AWS, while managing complex integrations with third-party providers, including robust incident management and proactive communication strategies.
- Actively participated in the migration of a monolithic architecture to a microservice-based design utilizing the Strangler Fig pattern, demonstrating architectural leadership and a commitment to scalable solutions.
- Managed and optimized asynchronous communication patterns utilizing circuit breakers, queues, and replay mechanisms to ensure resilience and reliability in the face of third-party service disruptions.
Full-stack Engineer
Senior Software Engineer
Freelance Developer
Software Engineer
Senior UI/UX Developer
User Experience Designer
Freelancer
Projects