Large-scale spatial clustering
Algorithms for clustering very large spatial datasets efficiently (near-linear runtime in practice), with robust behavior across different point distributions.
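One common way to get near-linear clustering on spatial data is to bin points into a uniform grid and flood-fill over occupied neighboring cells. This is a minimal sketch of that general technique, not the project's actual algorithm; the function name `grid_cluster` and the cell-side parameter `eps` are illustrative.

```python
from collections import defaultdict, deque

def grid_cluster(points, eps):
    """Cluster 2-D points by binning them into square cells of side
    `eps` and merging occupied cells that touch (8-neighborhood).
    Roughly O(n) for bounded-density data."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // eps), int(y // eps))].append(i)

    labels = [-1] * len(points)
    cluster = 0
    seen = set()
    for start in cells:
        if start in seen:
            continue
        # flood-fill over adjacent occupied cells
        queue = deque([start])
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            for i in cells[(cx, cy)]:
                labels[i] = cluster
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        cluster += 1
    return labels
```

The single hashing pass over points plus one visit per occupied cell is what keeps the behavior near-linear; robustness to distribution shape comes from merging cells rather than assuming globular clusters.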
My work focuses on building efficient intelligent systems by combining algorithms, hardware, and real-world data. This page gives a curated overview; detailed project reports and code are linked in the Archive.
Many real problems become computationally infeasible as they grow in scale. A recurring theme in my work is representing and solving large problems through sparsity, hierarchy, and validation-driven reduction.
Step-by-step reduction of complex systems under continuous validation, preserving correctness while the representation is simplified.
Alternatives to fully connected associative memory that use optimized partial connectivity to scale to larger network sizes.
Designs focused on minimizing compute, memory, and power while maintaining scientific usefulness and repeatable performance.
My neural network work spans foundational models, learning rules, and tool-building—aimed at systems that are interpretable, scalable, and hardware-friendly.
Probabilistic neuron behavior inspired by biological firing, along with training rules and comparisons to classical activation functions.
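A common formulation of such a probabilistic unit treats the weighted input as setting a firing probability through a sigmoid. The sketch below assumes that formulation; the name `stochastic_neuron` and the interface are illustrative, not taken from the project.

```python
import math
import random

def stochastic_neuron(inputs, weights, bias, rng=random):
    """Neuron that fires probabilistically: the weighted drive sets
    the firing probability via a sigmoid, mimicking the noisy,
    all-or-nothing character of biological spiking."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    p_fire = 1.0 / (1.0 + math.exp(-drive))
    return 1 if rng.random() < p_fire else 0
```

In the limit of strong drive this behaves like a hard threshold, and averaging many firings recovers the classical sigmoid activation, which is one natural basis for comparing it with deterministic activation functions.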
Sparse associative memory networks as a practical alternative to fully connected Hopfield-style models.
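A minimal sketch of the idea, assuming Hebbian storage masked by a fixed connectivity matrix so that only the allowed connections are ever trained. The function names and the tiny dense test mask are illustrative; in practice the point is to use a genuinely sparse mask (e.g. random connectivity at a chosen density) so memory cost grows far slower than the quadratic cost of a full Hopfield network.

```python
import numpy as np

def train_sparse_hopfield(patterns, mask):
    """Hebbian weights restricted to a fixed connectivity mask
    (symmetric 0/1 matrix with zero diagonal): connections outside
    the mask are never stored."""
    P = np.asarray(patterns, dtype=float)
    return (P.T @ P) * mask / len(P)

def recall(W, state, steps=10):
    """Synchronous +/-1 updates from a (possibly noisy) probe state."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s
```
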
Tooling for building and experimenting with neural networks and pattern recognition systems—built for research productivity and reproducibility.
Architectures where pre-trained networks operate as computational units within a larger adaptive system (a “net-of-nets” concept).
Methods to reduce network size while preserving function—supporting efficient deployment and analysis.
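The simplest instance of this family of methods is magnitude pruning: drop the smallest-magnitude weights and keep a binary mask of the survivors. This sketch shows that baseline technique only, under the assumption of a single weight matrix; the name `prune_by_magnitude` is illustrative.

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction):
    """Zero out the smallest-magnitude weights, keeping roughly
    `keep_fraction` of the connections. Returns the pruned weights
    and the binary mask of surviving connections."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    # k-th largest magnitude becomes the survival threshold
    threshold = np.partition(flat, -k)[-k]
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask
```

The mask doubles as an analysis artifact: it records exactly which connections the function depends on, which supports both efficient deployment and inspection.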
Neural designs and learning rules that consider practical implementation constraints from the beginning.
Building instruments and measurement systems that extract useful information from signals—combining electronics, signal processing, and AI.
Reflective light microscope workflow for 3D reconstruction using focus stacking and intelligent focus/defocus detection.
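The core of any focus-stacking pipeline is a per-image sharpness score; a standard choice is the variance of a discrete Laplacian, since defocus suppresses high spatial frequencies. This is a sketch of that standard measure, not the project's specific detector; the function names are illustrative.

```python
import numpy as np

def focus_measure(gray):
    """Sharpness score: variance of a 5-point discrete Laplacian over
    the interior of a grayscale image. Better focus -> higher score."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def sharpest_slice(stack):
    """Index of the best-focused image in a focus stack."""
    return max(range(len(stack)), key=lambda i: focus_measure(stack[i]))
```

For 3-D reconstruction the same measure is applied per tile or per pixel across the stack, and the depth at which sharpness peaks gives the height map.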
Improving counting efficiency in noisy conditions using noise-signature discrimination and anti-coincidence strategies.
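The anti-coincidence part of this can be sketched as a timestamp comparison: reject any detector event that has a hit in a guard (veto) detector within a small time window, since such events are likely background rather than signal. The function name and parameters below are illustrative assumptions.

```python
import bisect

def anti_coincidence_count(events, veto_events, window):
    """Count detector events (timestamps, seconds) with no guard-
    detector hit within +/- `window` seconds. Events seen by both
    detectors are treated as background and rejected."""
    veto = sorted(veto_events)
    accepted = 0
    for t in events:
        i = bisect.bisect_left(veto, t - window)
        # accept unless some veto time lies in [t - window, t + window]
        if i >= len(veto) or veto[i] > t + window:
            accepted += 1
    return accepted
```

Sorting the veto list once and binary-searching per event keeps the cost at O((n + m) log m), which matters when noise rates are high.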
Embedded modules and FPGA-oriented designs for scientific payload applications with strict constraints.
Low-power sensing, data logging, and analysis pipelines to turn raw signals into actionable knowledge.
Hybrid knowledge + neural approaches for extracting patterns from biological sequences and large biological databases.
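One simple way such hybrids are built is to encode domain knowledge as hand-crafted sequence features and feed them to a neural classifier. The sketch below shows a standard k-mer frequency featurization for DNA as one plausible knowledge-derived input; the function name and defaults are illustrative, not from the project.

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3, alphabet="ACGT"):
    """Fixed-length feature vector of k-mer frequencies for a DNA
    sequence. The vector has len(alphabet)**k entries in a fixed
    order, so it can be fed directly to a neural network."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(1, len(seq) - k + 1)
    return [counts["".join(p)] / total
            for p in product(alphabet, repeat=k)]
```
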
Combining knowledge-base properties and neural networks for improved prediction and analysis.
Tools and workflows to analyze and organize large biological datasets for research tasks.
Practical utilities for managing, extracting, and interpreting biological datasets and sequence properties.
Planetary science software and analysis tools for terrain visualization, crater-related workflows, and astro-material analysis.
Modeling and analysis tools supporting crater identification and classification tasks.
Visualization systems for exploring lunar and Martian surfaces, including stereo/anaglyph style browsing.
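The anaglyph part of such a browser reduces to a simple channel merge on a rectified stereo pair: red from the left eye, green and blue from the right. A minimal sketch of that standard technique (the function name is illustrative):

```python
import numpy as np

def make_anaglyph(left, right):
    """Red/cyan anaglyph from a rectified stereo pair of H x W x 3
    RGB arrays: red channel from the left image, green and blue from
    the right. Viewed through red/cyan glasses, each eye sees its own
    image, producing depth."""
    out = np.array(right, copy=True)
    out[..., 0] = left[..., 0]
    return out
```
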
Classification and analysis workflows for planetary material and rock-type datasets.
Tools intended to make planetary surface browsing more accessible for research and education.
Designing intelligent systems close to sensors—where power, memory, latency, and reliability matter as much as accuracy.
RMS measurement, filtering, and compression methods suitable for microcontrollers and long-running deployments.
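A sliding-window RMS suitable for a microcontroller can be computed in O(1) per sample by maintaining a running sum of squares instead of rescanning the buffer. This is a sketch of that general technique (the class name is illustrative); on a long-running device one would periodically recompute the sum to bound floating-point drift.

```python
import math
from collections import deque

class StreamingRMS:
    """Running RMS over a sliding window, O(1) per sample: keeps only
    the window contents and a running sum of squares."""
    def __init__(self, window):
        self.buf = deque(maxlen=window)
        self.sumsq = 0.0

    def update(self, x):
        if len(self.buf) == self.buf.maxlen:
            # the deque will evict this sample; remove its square first
            old = self.buf[0]
            self.sumsq -= old * old
        self.buf.append(x)
        self.sumsq += x * x
        return math.sqrt(self.sumsq / len(self.buf))
```
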
Distributed measurement and data fusion ideas that convert local observations into global models.
Modular compute nodes and architecture thinking for scalable, affordable AI experimentation.