Dr Frederik Mallmann-Trenn
Senior Lecturer in Computer Science (Data Science)
Head of the Algorithms and Data Analysis Research Group

Research subject areas: Computer science

Contact details: +44 (0)207 848 2683, frederik.mallmann-trenn@kcl.ac.uk
Publications

Tight Asymptotics of Extreme Order Statistics
Modeling Feasible Locomotion of Nanobots for Cancer Detection and Treatment
Quasi-Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks
IID Prophet Inequality with Random Horizon: Going Beyond Increasing Hazard Rates
Voter Model Meets Rumour Spreading: A Study of Consensus Protocols on Graphs with Agnostic Nodes [Extended Version]
Trading-off Accuracy and Communication Cost in Federated Learning
On the Sparsity of the Strong Lottery Ticket Hypothesis
Harmonizing vs Polarizing Platform Influence Functions
Asynchronous 3-Majority Dynamics with Many Opinions
Sorting in One and Two Rounds using t-Comparators
Community Consensus: Converging Locally despite Adversaries and Heterogeneous Connectivity
Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks
On coalescence time in graphs–When is coalescing as fast as meeting?
Distributed Averaging in Opinion Dynamics
Learning Hierarchically-Structured Concepts II: Overlapping Concepts, and Networks With Feedback
On Early Extinction and the Effect of Travelling in the SIR Model
Community Recovery in the Degree-Heterogeneous Stochastic Block Model
A Massively Parallel Modularity-Maximizing Algorithm With Provable Guarantees
Online Page Migration with ML Advice
Learning Hierarchically-Structured Concepts
Crowd Vetting: Rejecting Adversaries via Collaboration With Application to Multirobot Flocking
On the power of Louvain in the stochastic block model
Crowd Vetting: Rejecting Adversaries via Collaboration with Application to Multi-Robot Flocking
Diversity, Fairness and Sustainability in Population Protocols
Bayes Bots: Collective Bayesian Decision-Making in Decentralized Robot Swarms
Skyline Computation with Noisy Comparisons
Self-Stabilizing Task Allocation in Spite of Noise
Instance-Optimality in the Noisy Value- and Comparison-Model: Accept, Accept, Strong Accept: Which Papers get in?
Threshold load balancing with weighted tasks
Self-Stabilizing Balls and Bins in Batches: The Power of Leaky Bins
Eigenvector computation and community detection in asynchronous gossip models
Improved analysis of deterministic load-balancing schemes
Clustering redemption–beyond the impossibility of Kleinberg's axioms
On coalescence time in graphs: When is coalescing as fast as meeting?
Noidy conmunixatipn: On the convergence of the averaging population protocol
Hierarchical clustering: Objective functions and algorithms
How to color a French flag: Biologically inspired algorithms for scale-invariant patterning
How to spread a rumor: Call your neighbors or take a walk?
23 January 2025
New approach to training AI could significantly reduce time and energy involved
Pruning neural networks during training could make AI more efficient, sustainable and accessible