Morning Session

TeleLoRA: Teleporting Alignment across Large Language Models for Trojan Mitigation
Xiao Lin, Manoj Acharya, Anirban Roy, Susmit Jha (arXiv)

Collaborative Time Series Imputation through Meta-learned Implicit Neural Representations
Tong Nie, Wei Ma

Fusion of Graph Neural Networks via Optimal Transport
Weronika Ormaniec, Michael Vollenweider, Elisa Hoskovec (arXiv)

Finding Stable Subnetworks at Initialization with Dataset Distillation
Luke McDermott, Rahul Parhi (arXiv)

The Space Between: On Folding, Symmetries and Sampling
Michal Lewandowski, Bernhard Heinzl, Raphael Pisoni, Bernhard A. Moser (arXiv)

Uncovering Latent Chain of Thought Vectors in Large Language Models
Jason Zhang, Scott W Viteri (arXiv)

A Model Zoo of Vision Transformers
Damian Falk, Léo Meynent, Florence Pfammatter, Konstantin Schürholt, Damian Borth

Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations
Uvini Balasuriya Mudiyanselage, Woojin Cho, Minju Jo, Noseong Park, Kookjin Lee (arXiv)

A Model Zoo on Phase Transitions in Neural Networks
Konstantin Schürholt, Léo Meynent, Yefan Zhou, Yaoqing Yang, Damian Borth

ARC: Anchored Representation Clouds for High-Resolution INR Classification
Joost Luijmes, Alexander Gielisse, Roman Knyazhitskiy, Jan van Gemert (arXiv)

Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces for Large Finetuned Models
Theo Putterman, Derek Lim, Yoav Gelberg, Stefanie Jegelka, Haggai Maron

Mimetic Initialization of MLPs
Asher Trockman, J Zico Kolter

Improving Learning to Optimize Using Parameter Symmetries
Guy Zamir, Aryan Dokania, Bo Zhao, Rose Yu (arXiv)

The Empirical Impact of Reducing Symmetries on the Performance of Deep Ensembles and MoE
Andrei Chernov, Oleg Novitskij (arXiv)

Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces
Yihuai Hong, Lei Yu, Haiqin Yang, Shauli Ravfogel, Mor Geva

GradMetaNet: An Equivariant Architecture for Learning on Gradients
Yoav Gelberg, Yam Eitan, Aviv Navon, Aviv Shamsian, Theo Putterman, Haggai Maron

A Single Global Merging Suffices: Recovering Centralized Learning Performance in Decentralized Learning
Tongtian Zhu, Tianyu Zhang, Mingze Wang, Zhanpeng Zhou, Can Wang

On Symmetries in Convolutional Weights
Bilal Alsallakh, Timothy J Wroge, Vivek Miglani, Narine Kokhlikyan (arXiv)

Hyper-Align: Efficient Modality Alignment via Hypernetworks
Jaisidh Singh, Diganta Misra, Boris Knyazev, Antonio Orvieto

Recursive Self-Similarity in Deep Weight Spaces of Neural Architectures: A Fractal and Coarse Geometry Perspective
Ambarish Moharil, Indika Kumara, Majid Mohammadi, Damian Andrew Tamburri, Willem-Jan van den Heuvel (arXiv)

Cost-Efficient Continual Learning with Sufficient Exemplar Memory
Dong Kyu Cho, Taesup Moon, Rumi Chunara, Kyunghyun Cho, Sungmin Cha (arXiv)

Dataset Size Recovery from Fine-Tuned Model Weights
Mohammad Salama, Jonathan Kahana, Eliahu Horwitz, Yedid Hoshen (arXiv)
Afternoon Session

Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization
Zexi Li, Lingzhi Gao, Chao Wu (arXiv)

Instruction-Guided Autoregressive Neural Network Parameter Generation
Bedionita Soro, Bruno Andreis, Song Chong, Sung Ju Hwang (arXiv)

ProDiF: Protecting Domain-Invariant Features to Secure Pre-Trained Models Against Extraction
Tong Zhou, Shijin Duan, Gaowen Liu, Charles Fleming, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu (arXiv)

Shape Generation via Weight Space Learning
Maximilian Plattner, Arturs Berzins, Johannes Brandstetter (arXiv)

Mimetic Initialization Helps State Space Models Learn to Recall
Asher Trockman, Hrayr Harutyunyan, J Zico Kolter, Sanjiv Kumar, Srinadh Bhojanapalli

Scaling Up Parameter Generation: A Recurrent Diffusion Approach
Kai Wang, Dongwen Tang, Wangbo Zhao, Konstantin Schürholt, Zhangyang Wang, Yang You (arXiv)

Learning on Model Weights using Tree Experts
Eliahu Horwitz, Bar Cavia, Jonathan Kahana, Yedid Hoshen (arXiv)

Can this Model Also Recognize Dogs? Zero-Shot Model Search from Weights
Jonathan Kahana, Or Nathan, Eliahu Horwitz, Yedid Hoshen (arXiv)

Structure Is Not Enough: Leveraging Behavior for Neural Network Weight Reconstruction
Léo Meynent, Ivan Melev, Konstantin Schürholt, Goeran Kauermann, Damian Borth (arXiv)

Can We Optimize Deep RL Policy Weights as Trajectory Modeling?
Hongyao Tang (arXiv)

The Impact of Model Zoo Size and Composition on Weight Space Learning
Damian Falk, Konstantin Schürholt, Damian Borth

Vanishing Feature: Diagnosing Model Merging and Beyond
Xingyu Qu, Samuel Horváth (arXiv)

End-to-End Synthesis of Neural Programs in Weight Space
Wenhao Li, Yudong Xu, Elias Boutros Khalil, Scott Sanner

Flow to Learn: Flow Matching on Neural Network Parameters
Daniel Saragih, Deyu Cao, Tejas Balaji, Ashwin Santhosh (arXiv)

On the Internal Representations of Graph Metanetworks
Taesun Yeom, Jaeho Lee (arXiv)

Compressive Meta-Learning
Daniel Mas Montserrat, David Bonet, Maria Perera, Xavier Giró-i-Nieto, Alexander G. Ioannidis

Model Assembly Learning with Heterogeneous Layer Weight Merging
Yi-Kai Zhang, Jin Wang, Xu-Xiang Zhong, De-Chuan Zhan, Han-Jia Ye (arXiv)

GNNMerge: Merging of GNN Models without Accessing Training Data
Vipul Garg, Ishita Thakre, Sayan Ranu (arXiv)