[Front cover]
IEEE Journal on Selected Areas in Information Theory information for authors
Table of contents
Editorial
An Overview of Information-Theoretic Security and Privacy: Metrics, Limits and Applications
This tutorial reviews fundamental contributions to information-theoretic security. It takes an integrative viewpoint, explaining the security metrics, including secrecy, privacy, and others; the methodology of information-theoretic approaches and the system design principles that arise from them; and the techniques that enable information-theoretic designs to be applied in real communication and computing systems.
Sequential Change Detection by Optimal Weighted ℓ₂ Divergence
We present a new non-parametric statistic, called the weighted ℓ₂ divergence, based on empirical distributions, for sequential change detection. We start by constructing the weighted ℓ₂ divergence as a fundamental building block for two-sample tests and change detection. The proposed statistic is proved to attain the optimal sample complexity in the offline setting.
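To make the flavor of such a statistic concrete, the following is a minimal illustrative sketch of a weighted ℓ₂ distance between the empirical (binned) distributions of two samples. The uniform weights and the binning grid here are hypothetical illustrative choices, not the optimal weighting construction from the paper.

```python
import numpy as np

def weighted_l2_divergence(x, y, bins, weights=None):
    """Weighted l2 distance between the binned empirical distributions
    of samples x and y. Uniform default weights are an illustrative
    placeholder, not the paper's optimal weights."""
    p, _ = np.histogram(x, bins=bins)
    q, _ = np.histogram(y, bins=bins)
    p = p / p.sum()                      # empirical probabilities
    q = q / q.sum()
    if weights is None:
        weights = np.ones_like(p, dtype=float)
    return float(np.sum(weights * (p - q) ** 2))

rng = np.random.default_rng(0)
bins = np.linspace(-5, 5, 21)
# Same distribution: statistic should be small.
same = weighted_l2_divergence(rng.normal(0, 1, 5000),
                              rng.normal(0, 1, 5000), bins)
# Shifted distribution: statistic should be larger.
diff = weighted_l2_divergence(rng.normal(0, 1, 5000),
                              rng.normal(1, 1, 5000), bins)
```

In a sequential setting, one would recompute such a statistic on a sliding window of observations and declare a change when it crosses a threshold.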
Sequential (Quickest) Change Detection: Classical Results and New Directions
Online detection of changes in stochastic systems, referred to as sequential change detection or quickest change detection, is an important research topic in statistics, signal processing, and information theory, and has a wide range of applications. This survey starts with the basics of sequential change detection, and then moves on to generalizations and extensions of sequential change detection theory and methods.
One for All and All for One: Distributed Learning of Fair Allocations With Multi-Player Bandits
Consider N cooperative but non-communicating players, each of whom plays one of M arms for T turns. Players have different utilities for each arm, represented as an N×M matrix. These utilities are unknown to the players. In each turn, each player selects an arm and receives a noisy observation of their utility for it. However, if any other player selected the same arm in that turn, all colliding players receive zero utility due to the conflict. No communication between the players is possible.
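The collision model described above can be sketched as a small simulation. The utility matrix, noise level, and arm choices below are hypothetical, purely to illustrate the reward rule (noisy utility on a unique arm, zero on a collision):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 3, 5
utility = rng.uniform(size=(N, M))   # N x M utility matrix, unknown to players

def play_round(choices, noise=0.1):
    """choices[i] is the arm chosen by player i; returns per-player rewards.
    Colliding players get zero; unique players get a noisy utility sample."""
    rewards = np.zeros(N)
    for i, arm in enumerate(choices):
        if list(choices).count(arm) > 1:          # collision on this arm
            rewards[i] = 0.0
        else:
            rewards[i] = utility[i, arm] + noise * rng.standard_normal()
    return rewards

collide = play_round([2, 2, 4])   # players 0 and 1 collide on arm 2
clean = play_round([0, 2, 4])     # all arms distinct, all get noisy utility
```

The learning problem is then to coordinate on a fair collision-free matching of players to arms using only these observed rewards, with no explicit communication.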
Empirical Policy Evaluation With Supergraphs
We devise algorithms for the policy evaluation problem in reinforcement learning, assuming access to a simulator and certain side information called the supergraph. Our algorithms explore backward from high-cost states to find high-value ones, in contrast to approaches that work forward from all states. While several papers have demonstrated the utility of backward exploration empirically, we conduct rigorous analyses that show our algorithms can reduce average-case sample complexity from $O(S \log S)$ to as low as $O(\log S)$.