LLMs tend to lose prior skills when fine-tuned for new tasks. A new self-distillation approach aims to reduce regression and ...
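The blurb does not show the method's details, but the standard way a self-distillation term discourages regression is to penalize the fine-tuned model for drifting from a frozen copy of its pre-fine-tuning self on the same inputs. A minimal sketch, assuming a temperature-softened KL penalty (the function name and temperature choice here are illustrative, not from the article):

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    teacher_logits come from the frozen pre-fine-tuning model; adding this
    term to the task loss pulls the student back toward its old behavior,
    which is how regression on prior skills is typically reduced.
    """
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    kl = np.sum(t * (np.log(t + 1e-12) - np.log(s + 1e-12)), axis=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return float(np.mean(kl) * temperature ** 2)
```

In training, this term would be weighted against the new-task cross-entropy; the weight trades plasticity on the new task against retention of prior skills.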
Abstract: A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive ...
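To make the collapse-avoidance claim concrete: whitening constrains the batch of embeddings to have identity covariance, so the degenerate solution where every input maps to the same vector is ruled out even though the loss only pulls positive pairs together. A minimal sketch, assuming ZCA whitening and a mean-squared distance between whitened positives (function names are illustrative; the truncated abstract does not specify this exact formulation):

```python
import numpy as np

def zca_whiten(x, eps=1e-5):
    """ZCA-whiten a batch of embeddings: zero mean, ~identity covariance."""
    xc = x - x.mean(axis=0, keepdims=True)
    cov = xc.T @ xc / (x.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    # Symmetric whitening matrix; eps guards against tiny eigenvalues.
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return xc @ w

def whitening_loss(z1, z2):
    """Mean squared distance between whitened embeddings of positive pairs.

    Because each view is whitened before comparison, minimizing this
    distance cannot be achieved by collapsing all embeddings to a point.
    """
    return float(np.mean((zca_whiten(z1) - zca_whiten(z2)) ** 2))
```

Here z1 and z2 would be the embeddings of two augmented views of the same batch; the whitening step is what provides the collapse-avoidance guarantee the abstract refers to.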
Abstract: Owing to the cost of collecting labeled sensor data, self-supervised learning (SSL) methods for human activity recognition (HAR) that effectively use unlabeled data for pretraining have ...