Abstract: In this seminar, I shall discuss several estimators of the finite population mean when the data are infinite-dimensional in nature. The performance of these estimators will be compared, on the basis of their asymptotic distributions, under different sampling designs and superpopulations satisfying linear models. One of the major findings is that although the use of auxiliary information at the estimation stage usually improves the performance of different estimators, its use at the sampling design stage often has adverse effects on their performance. This seminar is based on joint research work with my Ph.D. supervisor, Prof. Probal Chaudhuri.
Abstract: The development of structure-preserving time integrators has been a major focus of numerical analysis for the last few decades. In the first part of my presentation, I will discuss relaxation Runge-Kutta (RK) methods, designed to preserve essential conserved quantities during time integration. I will first demonstrate how a slight modification of RK methods can be employed to conserve a single nonlinear invariant.
Subsequently, I will introduce the generalization of the relaxation approach for RK methods to conserve multiple nonlinear invariants in a dynamical system. The significance of preserving invariants and its impact on long-term error growth will be illustrated through numerical examples.
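The relaxation mechanism for a single invariant can be sketched in a few lines: a standard RK update direction is computed, and the step is then rescaled by a factor gamma, chosen so that the invariant is conserved exactly. The classical RK4 tableau, the harmonic-oscillator example, and the root-bracketing interval below are illustrative choices of mine, not details taken from the talk.

```python
import numpy as np
from scipy.optimize import brentq

# Classical RK4 tableau (an illustrative choice; the talk treats general RK methods)
A = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 0.5, 0.5, 1])

def rk_stages(f, t, u, dt):
    """Compute the RK stage derivatives k_i for an explicit tableau."""
    k = np.zeros((len(b), len(u)))
    for i in range(len(b)):
        k[i] = f(t + c[i] * dt, u + dt * A[i, :i] @ k[:i])
    return k

def relaxation_step(f, eta, t, u, dt):
    """One relaxation RK step: scale the update u + gamma*d by a factor gamma
    chosen so that the nonlinear invariant eta is conserved exactly
    (up to root-finder tolerance). gamma stays close to 1 for small dt."""
    k = rk_stages(f, t, u, dt)
    d = dt * b @ k                       # standard RK update direction
    g = lambda gamma: eta(u + gamma * d) - eta(u)
    gamma = brentq(g, 0.5, 1.5)          # bracketing interval is an assumption
    return t + gamma * dt, u + gamma * d

# Harmonic oscillator u' = (u2, -u1) with invariant eta(u) = u1^2 + u2^2
f = lambda t, u: np.array([u[1], -u[0]])
eta = lambda u: u @ u

t, u = 0.0, np.array([1.0, 0.0])
for _ in range(100):
    t, u = relaxation_step(f, eta, t, u, 0.1)
print(abs(eta(u) - 1.0))  # invariant drift: essentially root-finder tolerance
```

Without the relaxation factor (gamma fixed at 1), RK4 slowly dissipates the energy of this oscillator; with it, the invariant is held to within the root-finder tolerance over arbitrarily long integrations.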
In the second part, I will address another crucial challenge in high-order time integration: the phenomenon of order reduction that RK methods exhibit when applied to stiff problems, along with its remedy.
I will first illustrate this issue in RK methods and then introduce the remedy through high Weak Stage Order (WSO), capable of alleviating order reduction in linear problems with time-independent operators.
Additionally, I will briefly discuss stiff order conditions, which are more general and can eliminate order reduction for a broader class of problems, specifically semilinear problems. This extension is essential to overcome the limitations of WSO, which primarily focuses on linear problems.
Abstract: Modern biological studies often involve large-scale hypothesis testing problems, where hypotheses are organized in a Directed Acyclic Graph (DAG). Extensive research has established that prior structural information can play a vital role in improving the power of classical multiple testing procedures and in obtaining valid and meaningful inference. In a DAG, each node represents a hypothesis, and the edges denote a logical sequence of relationships among these hypotheses that must be taken into account by a multiple testing procedure. A hypothesis rejected by the testing procedure should also result in the rejection of all its ancestors; we term this a "legitimate rejection." We propose an intuitive approach that applies a Benjamini-Hochberg type procedure on the DAG, and filters the set of rejected hypotheses to eliminate all illegitimate rejections. Additionally, we introduce a weighted version of this procedure, where each p-value is assigned a weight proportional to the number of non-null hypotheses within the group(s) defined by its parent node(s). This approach facilitates easier rejection of p-values in groups predominantly containing non-null hypotheses, while harder rejection is applied to p-values in groups with mostly null hypotheses. Our unweighted and weighted methods respectively simplify to the Benjamini-Hochberg procedure and the Storey-type Adaptive Benjamini-Hochberg procedure when the DAG is edge-free. Our methods are proven to control the False Discovery Rate (FDR) when applied to independent p-values. The unweighted method also controls the FDR for PRDS p-values. Simulation studies confirm that the weighted data-adaptive version of our method also maintains similar FDR control, albeit under certain conditions. Our simulation studies further elucidate the scenarios where our proposed methods are more powerful than their competitors. This is a joint work with Dr. Marina Bogomolov, Technion - Israel Institute of Technology.
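The unweighted procedure described above (a Benjamini-Hochberg step followed by removal of illegitimate rejections) can be sketched roughly as follows. The iterative parent-based filtering loop and the chain-DAG example are my illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def bh_rejections(pvals, alpha):
    """Standard Benjamini-Hochberg: reject the k smallest p-values, where k is
    the largest index with p_(k) <= k * alpha / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = alpha * np.arange(1, m + 1) / m
    below = np.asarray(pvals)[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return {int(i) for i in order[:k]}

def legitimate_rejections(pvals, parents, alpha):
    """BH on the DAG followed by a filter: keep a rejection only if all of its
    parents are also kept; iterating until stable enforces the condition for
    all ancestors ('legitimate rejections').  parents[i] lists the parent
    nodes of hypothesis i in the DAG (roots have an empty list)."""
    rejected = bh_rejections(pvals, alpha)
    changed = True
    while changed:
        changed = False
        for i in list(rejected):
            if any(p not in rejected for p in parents[i]):
                rejected.discard(i)   # illegitimate: a parent was not rejected
                changed = True
    return rejected

# Chain DAG 0 -> 1 -> 2: plain BH rejects {0, 2}, but rejecting 2 without its
# parent 1 is illegitimate, so the filter drops it.
demo = legitimate_rejections([0.001, 0.2, 0.003], [[], [0], [1]], 0.05)
print(demo)  # only hypothesis 0 survives the filter
```

When every `parents[i]` is empty (an edge-free DAG), the filter is vacuous and the procedure reduces to plain BH, matching the reduction stated in the abstract.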
Abstract: Hypothesis testing problems are fundamental to the theory and practice of statistics. It is well known that when the union of the null and the alternative does not encompass the full parameter space, the possibility of a Type III error arises, i.e., the null hypothesis may be rejected when neither the null nor the alternative is true. In such situations, common in the context of order-restricted inference, the validity of our inferences may be severely compromised. The study of the geometry of the distance test, a test widely used in constrained inference, illuminates circumstances in which Type III errors arise and motivates the introduction of \emph{safe tests}. Heuristically, a safe test is a test which, at least asymptotically, is free of Type III errors.
A novel safe test is proposed and studied. The new testing procedure is associated with a \emph{certificate of validity}, a pre-test indicating whether the original hypotheses are consistent with the data.
Consequently, Type III errors can be addressed in a principled way, and constrained tests can be carried out without fear of systematically incorrect inferences. Although we focus on testing problems arising in order-restricted inference, the underlying ideas are more broadly applicable. The benefits associated with the proposed methodology are demonstrated by simulations and the analysis of several illustrative examples.