Sensitivity analyses for sparse-data problems—using weakly informative Bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist. PMID:23337241
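The shrinkage behaviour described in this abstract can be sketched with a normal approximation on the log odds-ratio scale. This is a simplification of the authors' MCMC approach, not their implementation; the prior scale and the estimates below are illustrative numbers, not values from the paper:

```python
import math

def normal_posterior(prior_mean, prior_sd, est, se):
    """Combine a normal prior with a normal data likelihood by
    precision weighting -- an approximation to the Bayesian update
    on the log odds-ratio scale."""
    w_prior = 1.0 / prior_sd ** 2
    w_data = 1.0 / se ** 2
    mean = (w_prior * prior_mean + w_data * est) / (w_prior + w_data)
    sd = math.sqrt(1.0 / (w_prior + w_data))
    return mean, sd

# Weakly informative prior on the log odds ratio: N(0, 1.5^2)
PRIOR_MEAN, PRIOR_SD = 0.0, 1.5

# Sparse data: log OR estimate 2.0 with a large SE -> shrunk toward 0
sparse_mean, _ = normal_posterior(PRIOR_MEAN, PRIOR_SD, est=2.0, se=1.5)

# Rich data: same estimate with a small SE -> the prior barely moves it
rich_mean, _ = normal_posterior(PRIOR_MEAN, PRIOR_SD, est=2.0, se=0.2)

print(sparse_mean)  # pulled halfway toward the prior mean
print(rich_mean)    # stays close to 2.0
```

When the data SE equals the prior SD, prior and likelihood carry equal weight, so the sparse-data estimate lands exactly halfway between the prior mean and the observed estimate, which is the sensitivity the abstract describes.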
Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko
2018-05-31
The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as defaults in the absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observations in the datasets contribute to the cluster-level estimates through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones even when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge, and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
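The core of the abstract's approach, deriving the threshold-exceedance probability from the posterior of the death rate, can be sketched with a plain Gamma-Poisson conjugate update. This is a minimal single-level sketch, not the authors' hierarchical mixture model, and the prior hyperparameters and data are illustrative placeholders:

```python
import random

def cdr_exceedance_probability(deaths, person_days,
                               prior_shape=0.5, prior_rate=1e-4,
                               threshold=1e-4, n_draws=100_000, seed=1):
    """Gamma-Poisson update for a crude death rate (deaths per
    person-day): with deaths ~ Poisson(rate * person_days) and a
    Gamma(prior_shape, prior_rate) prior, the posterior is
    Gamma(prior_shape + deaths, prior_rate + person_days).
    Returns the posterior probability that the rate exceeds an
    emergency threshold (here 1 death / 10,000 person-days),
    estimated by Monte Carlo."""
    shape = prior_shape + deaths
    rate = prior_rate + person_days
    rng = random.Random(seed)
    # random.gammavariate takes (shape, scale); scale = 1 / rate
    draws = (rng.gammavariate(shape, 1.0 / rate) for _ in range(n_draws))
    return sum(d > threshold for d in draws) / n_draws

# Illustrative survey: 30 deaths over 200,000 person-days
# (an observed rate of 1.5 deaths per 10,000 person-days)
p = cdr_exceedance_probability(30, 200_000)
print(p)  # high probability that the emergency threshold is surpassed
```

The same exceedance probability is what the abstract reads off the posterior of the mean of the mixing distribution; the hierarchical structure only changes how that posterior is assembled from cluster-level data.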
Mendelian randomization with Egger pleiotropy correction and weakly informative Bayesian priors.
Schmidt, A F; Dudbridge, F
2017-12-15
The MR-Egger (MRE) estimator has been proposed to correct for directional pleiotropic effects of genetic instruments in an instrumental variable (IV) analysis. The power of this method is considerably lower than that of conventional estimators, limiting its applicability. Here we propose a novel Bayesian implementation of the MR-Egger estimator (BMRE) and explore the utility of applying weakly informative priors on the intercept term (the pleiotropy estimate) to increase power of the IV (slope) estimate. This was a simulation study to compare the performance of different IV estimators. Scenarios differed in the presence of a causal effect, the presence of pleiotropy, the proportion of pleiotropic instruments and degree of 'Instrument Strength Independent of Direct Effect' (InSIDE) assumption violation. Based on empirical plasma urate data, we present an approach to elucidate a prior distribution for the amount of pleiotropy. A weakly informative prior on the intercept term increased power of the slope estimate while maintaining type 1 error rates close to the nominal value of 0.05. Under the InSIDE assumption, performance was unaffected by the presence or absence of pleiotropy. Violation of the InSIDE assumption biased all estimators, affecting the BMRE more than the MRE method. Depending on the prior distribution, the BMRE estimator has more power at the cost of an increased susceptibility to InSIDE assumption violations. As such the BMRE method is a compromise between the MRE and conventional IV estimators, and may be an especially useful approach to account for observed pleiotropy. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
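The structure of the MR-Egger estimator discussed above can be sketched as a simple regression of outcome associations on exposure associations with an intercept. This sketch is the classical (unweighted, non-Bayesian) skeleton only; the paper's BMRE variant additionally places a weakly informative prior on the intercept, which is omitted here, and all simulated numbers are illustrative:

```python
import random

def mr_egger(bx, by):
    """Unweighted MR-Egger regression: ordinary least squares of
    outcome associations (by) on exposure associations (bx) with an
    intercept. The intercept estimates directional pleiotropy; the
    slope estimates the causal effect."""
    n = len(bx)
    mx = sum(bx) / n
    my = sum(by) / n
    sxx = sum((x - mx) ** 2 for x in bx)
    sxy = sum((x - mx) * (y - my) for x, y in zip(bx, by))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Simulated instruments: true causal effect 0.5, constant directional
# pleiotropy 0.1 affecting every instrument (illustrative values)
rng = random.Random(7)
bx = [rng.uniform(0.1, 0.5) for _ in range(50)]
by = [0.1 + 0.5 * x + rng.gauss(0.0, 0.01) for x in bx]
intercept, slope = mr_egger(bx, by)
# intercept recovers the pleiotropy (~0.1), slope the causal effect (~0.5)
```

Shrinking the intercept toward zero with a prior, as the BMRE does, transfers information to the slope, which is why power increases but InSIDE violations (which bias the intercept) hurt more.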
Jiang, Yu; Simon, Steve; Mayo, Matthew S; Gajewski, Byron J
2015-02-20
Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The new proposed priors are able to adaptively utilize the researcher's previous experience and current accrual data to produce the estimation of trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulation. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing smaller mean squared error, a high percentage of coverage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when the accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior performs much like the weak priors when the accrual is extremely off target but closer to the strong priors when the accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research. Copyright © 2014 John Wiley & Sons, Ltd.
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Hippocampus segmentation using locally weighted prior based level set
NASA Astrophysics Data System (ADS)
Achuthan, Anusha; Rajeswari, Mandava
2015-12-01
Segmentation of the hippocampus is one of the major challenges in medical image segmentation due to its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures such as the amygdala. This intensity similarity causes the hippocampus to have weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results. An assimilation of prior information, such as shape and spatial information, into an existing segmentation method is therefore needed to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been integrated in a global manner, which does not reflect the real scenario during clinical delineation. Therefore, in this paper, prior information locally integrated into a level set model is presented. This work utilizes a mean shape model to provide automatic initialization for level set evolution, and this model has been integrated as prior information into the level set model. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during level set evolution. The edge weighting map indicates which voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as geodesic active contour, improves the averaged Dice coefficient by 9%.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed, as well as potential advantages and caveats with the Bayesian approach.
Controlling quantum memory-assisted entropic uncertainty in non-Markovian environments
NASA Astrophysics Data System (ADS)
Zhang, Yanliang; Fang, Maofa; Kang, Guodong; Zhou, Qingping
2018-03-01
The quantum memory-assisted entropic uncertainty relation (QMA EUR) shows that the lower bound of Maassen and Uffink's entropic uncertainty relation (without quantum memory) can be broken. In this paper, we investigated the dynamical features of the QMA EUR in Markovian and non-Markovian dissipative environments. It is found that the dynamics of the QMA EUR is oscillatory in a non-Markovian environment, and that strong interaction is favorable for suppressing the amount of entropic uncertainty. Furthermore, we presented two schemes, based on prior weak measurement and posterior weak measurement reversal, to control the amount of entropic uncertainty of Pauli observables in dissipative environments. The numerical results show that the prior weak measurement can effectively reduce the peak values of the QMA EUR dynamics in a non-Markovian environment over long periods of time, but it has no effect on the minima of the dynamics. The posterior weak measurement reversal, however, has the opposite effect on the dynamics. Moreover, the success probability depends entirely on the quantum measurement strength. We hope that our proposal could be verified experimentally and might have future applications in quantum information processing.
Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.
Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim
2017-12-01
The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Meta-analysis of few small studies in orphan diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-03-01
Meta-analyses in orphan diseases and small populations generally face particular problems, including small numbers of studies, small study sizes and heterogeneity of results. However, the heterogeneity is difficult to estimate if only very few studies are included. Motivated by a systematic review in immunosuppression following liver transplantation in children, we investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies. Furthermore, the consequences for interval estimation of the common treatment effect in random-effects meta-analysis are assessed. The Bayesian credibility intervals using weakly informative priors for the between-trial heterogeneity exhibited coverage probabilities in excess of the nominal level for a range of scenarios considered. However, they tended to be shorter than those obtained by the Knapp-Hartung method, which were also conservative. In contrast, methods based on normal quantiles exhibited coverages well below the nominal levels in many scenarios. With very few studies, the performance of the Bayesian credibility intervals is of course sensitive to the specification of the prior for the between-trial heterogeneity. In conclusion, the use of weakly informative priors as exemplified by half-normal priors (with a scale of 0.5 or 1.0) for log odds ratios is recommended for applications in rare diseases. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
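The half-normal priors recommended above can be made concrete by looking at what heterogeneity values they allow a priori. The sketch below draws from half-normal priors with the two scales the abstract names and reports the 95% prior quantile of the between-trial standard deviation tau (on the log odds-ratio scale); the Monte Carlo setup is illustrative, not from the paper:

```python
import random

def half_normal_draws(scale, n=100_000, seed=42):
    """Draw from a half-normal prior for the between-trial
    heterogeneity tau: the absolute value of a N(0, scale^2) draw."""
    rng = random.Random(seed)
    return [abs(rng.gauss(0.0, scale)) for _ in range(n)]

for scale in (0.5, 1.0):
    draws = sorted(half_normal_draws(scale))
    q95 = draws[int(0.95 * len(draws))]
    # 95% prior quantile of tau; analytically it is scale * 1.96
    print(scale, q95)
```

A half-normal(0.5) prior keeps 95% of its mass below tau of about 1 on the log odds-ratio scale, so it regularizes the heterogeneity estimate when there are too few studies to estimate it from data, which is exactly the sparse-study setting the abstract addresses.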
Category Change in the Absence of Cognitive Conflict
ERIC Educational Resources Information Center
Ramsburg, Jared T.; Ohlsson, Stellan
2016-01-01
The cognitive conflict hypothesis asserts that information that directly contradicts a prior conception is 1 of the prerequisites for conceptual change and other forms of nonmonotonic learning. There have been numerous attempts to support this hypothesis by adding a conflict intervention to learning scenarios with weak outcomes. Outcomes have been…
Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors
ERIC Educational Resources Information Center
Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen
2012-01-01
Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…
Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models
ERIC Educational Resources Information Center
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent
2015-01-01
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up large scene reconstruction from images acquired by unmanned aerial vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Because topographic relief can usually be ignored relative to the flight altitude of unmanned aerial vehicles, we assume that the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
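The removal-sampling likelihood the authors analyze has a simple form: on each sampling pass, every remaining animal is caught independently with probability p. The grid-search sketch below writes down that likelihood and locates its maximizer; the catch data and grid bounds are illustrative, and the Bayesian penalization the paper recommends is not implemented here:

```python
import math

def removal_loglik(N, p, catches):
    """Log-likelihood for removal sampling: on each pass, every one of
    the remaining animals is caught independently with probability p,
    so the catch on each pass is Binomial(remaining, p)."""
    ll = 0.0
    remaining = N
    for c in catches:
        if c > remaining:
            return float("-inf")
        ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1)
               - math.lgamma(remaining - c + 1)
               + c * math.log(p) + (remaining - c) * math.log(1.0 - p))
        remaining -= c
    return ll

catches = [15, 10, 7]  # illustrative removal counts over three passes
total = sum(catches)

# Joint maximum-likelihood estimate of (N, p) over a coarse grid
_, N_hat, p_hat = max(
    (removal_loglik(N, p / 100.0, catches), N, p / 100.0)
    for N in range(total, 500)
    for p in range(1, 100))
```

With steadily declining catches the likelihood has a clear interior maximum; when the decline is weak, the likelihood flattens along a ridge of large N and small p, which is the instability that improper or weakly informative priors fail to control and that motivates the paper's penalization.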
Information gains from cosmological probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandis, S.; Seehars, S.; Refregier, A.
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated from information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the 'surprise', i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing as well as the 2015 Planck release. We consider the parameters of the flat ΛCDM concordance model and some of its extensions which include curvature and the Dark Energy equation of state parameter w. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and H0 measures (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits which is statistically significant at the 8σ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
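The bits-of-information measure used above has a closed form for Gaussian constraints. The one-dimensional sketch below computes the relative entropy between two normal distributions in bits, separating the pure precision gain from the extra 'surprise' contributed by a shift in the central value; the paper works with full multi-dimensional posteriors, and the numbers here are illustrative:

```python
import math

def kl_bits(mu0, sd0, mu1, sd1):
    """Relative entropy D(p1 || p0) between two univariate normal
    distributions, converted from nats to bits: the information
    gained when a Bayesian update moves a Gaussian constraint
    from (mu0, sd0) to (mu1, sd1)."""
    nats = (math.log(sd0 / sd1)
            + (sd1 ** 2 + (mu1 - mu0) ** 2) / (2.0 * sd0 ** 2)
            - 0.5)
    return nats / math.log(2.0)

# Pure precision gain: same central value, error bar halved
precision_only = kl_bits(0.7, 0.04, 0.7, 0.02)

# Added 'surprise': the central value also shifts by two prior sigma
with_shift = kl_bits(0.7, 0.04, 0.78, 0.02)

print(precision_only)  # modest gain from tightening alone
print(with_shift)      # much larger, driven by the shifted centre
```

The comparison shows why a tension between datasets registers as a large information gain: the shift term (mu1 - mu0)^2 / (2 sd0^2) can dominate the precision term, which is how the CFHTLenS surprise arises in the abstract.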
ERIC Educational Resources Information Center
Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel
2012-01-01
In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
Protecting quantum Fisher information in curved space-time
NASA Astrophysics Data System (ADS)
Huang, Zhiming
2018-03-01
In this work, we investigate the quantum Fisher information (QFI) dynamics of a two-level atom interacting with quantized conformally coupled massless scalar fields in de Sitter-invariant vacuum. We first derive the master equation that governs its evolution. It is found that the QFI decays with evolution time. Furthermore, we propose two schemes to protect QFI by employing prior weak measurement (WM) and post measurement reversal (MR). We find that the first scheme can not always protect QFI and the second scheme has prominent advantage over the first scheme.
Deciphering the Balkan Enigma: Using History to Inform Policy. Revised Edition,
1995-11-07
After long experience with German interference in Serbian/Yugoslav affairs (1878, 1908, 1914-1918), it should not be surprising that the Yugoslav…background to the religious, as well as political, differences between the various branches of the church and the division, see Robert R. Palmer and Joel…comprehensive solution prior to embarking on incremental ways to resolve issues. Political institutions are weak. This condition complicates
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances.
Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space, from a noisy observation vector y of its image through a known, possibly non-linear, map $\mathcal{G}$. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well-known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dietrich, J.P.; et al.
Uncertainty in the mass-observable scaling relations is currently the limiting factor for galaxy cluster based cosmology. Weak gravitational lensing can provide a direct mass calibration and reduce the mass uncertainty. We present new ground-based weak lensing observations of 19 South Pole Telescope (SPT) selected clusters and combine them with previously reported space-based observations of 13 galaxy clusters to constrain the cluster mass scaling relations with the Sunyaev-Zel'dovich effect (SZE), the cluster gas mass $M_\mathrm{gas}$, and $Y_\mathrm{X}$, the product of $M_\mathrm{gas}$ and X-ray temperature. We extend a previously used framework for the analysis of scaling relations and cosmological constraints obtained from SPT-selected clusters to make use of weak lensing information. We introduce a new approach to estimate the effective average redshift distribution of background galaxies and quantify a number of systematic errors affecting the weak lensing modelling. These errors include a calibration of the bias incurred by fitting a Navarro-Frenk-White profile to the reduced shear using $N$-body simulations. We blind the analysis to avoid confirmation bias. We are able to limit the systematic uncertainties to 6.4% in cluster mass (68% confidence). Our constraints on the mass-X-ray observable scaling relation parameters are consistent with those obtained by earlier studies, and our constraints for the mass-SZE scaling relation are consistent with the simulation-based prior used in the most recent SPT-SZ cosmology analysis. We can now replace the external mass calibration priors used in previous SPT-SZ cosmology studies with a direct, internal calibration obtained on the same clusters.
2013-01-01
Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus, ads valuable precision when data is sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pair wise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. 
Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
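The precision trade-off discussed above can be caricatured with a toy normal random-effects model: with only a few trials, a moderately informative half-normal prior on the between-trial standard deviation tightens the credible interval relative to a weakly informative one. This is a minimal sketch with invented numbers, using a grid posterior in place of the MCMC an actual MTC analysis would use:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
K = 4                                  # few trials: sparse information on tau
s = np.full(K, 0.3)                    # within-trial standard errors
y = rng.normal(0.2, np.hypot(0.4, s))  # simulated effects, true tau = 0.4

mu = np.linspace(-2.0, 2.0, 201)       # grid over the pooled mean
tau = np.linspace(0.01, 2.0, 200)      # grid over the between-trial SD
M, T = np.meshgrid(mu, tau)

# Log-likelihood of the normal random-effects model on the (mu, tau) grid
ll = sum(stats.norm.logpdf(y[k], M, np.sqrt(T**2 + s[k]**2)) for k in range(K))

def tau_posterior(log_prior_tau):
    lp = ll + log_prior_tau
    p = np.exp(lp - lp.max())
    ptau = p.sum(axis=1)               # marginalize out mu
    return ptau / ptau.sum()

weak = tau_posterior(stats.halfnorm.logpdf(T, scale=5.0))    # weakly informative
inform = tau_posterior(stats.halfnorm.logpdf(T, scale=0.5))  # moderately informative

def width95(ptau):
    cdf = np.cumsum(ptau)
    return tau[np.searchsorted(cdf, 0.975)] - tau[np.searchsorted(cdf, 0.025)]

# The informative prior yields a narrower 95% credible interval for tau
print(width95(weak), width95(inform))
```

With this sparse data set the weakly informative prior leaves tau poorly constrained, which is exactly the situation where the paper's moderately informative priors add value.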
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Putter, Roland; Doré, Olivier; Das, Sudeep
2014-01-10
Cross correlations between the galaxy number density in a lensing source sample and that in an overlapping spectroscopic sample can in principle be used to calibrate the lensing source redshift distribution. In this paper, we study in detail to what extent this cross-correlation method can mitigate the loss of cosmological information in upcoming weak lensing surveys (combined with a cosmic microwave background prior) due to lack of knowledge of the source distribution. We consider a scenario where photometric redshifts are available and find that, unless the photometric redshift distribution p(z_ph|z) is calibrated very accurately a priori (bias and scatter known to ∼0.002 for, e.g., EUCLID), the additional constraint on p(z_ph|z) from the cross-correlation technique to a large extent restores the cosmological information originally lost due to the uncertainty in dn/dz(z). Considering only the gain in photo-z accuracy and not the additional cosmological information, enhancements of the dark energy figure of merit of up to a factor of four (40) can be achieved for a SuMIRe-like (EUCLID-like) combination of lensing and redshift surveys (SuMIRe stands for Subaru Measurement of Images and Redshifts). However, the success of the method is strongly sensitive to our knowledge of the galaxy bias evolution in the source sample, and we find that a percent-level bias prior is needed to optimize the gains from the cross-correlation method (i.e., to approach the cosmology constraints attainable if the bias were known exactly).
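The core of this cross-correlation ("clustering-z") calibration can be sketched in a few lines: in each spectroscopic redshift bin, the cross-correlation amplitude scales with the product of the matter clustering amplitude, the two samples' galaxy biases, and the fraction of sources in that bin, so dn/dz is recoverable only to the extent the source bias is known, which is why the abstract stresses the percent-level bias prior. All numbers below are invented for illustration:

```python
import numpy as np

# True redshift distribution of the photometric source sample over 5 spec-z bins
f_true = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
b_src = np.array([1.1, 1.2, 1.3, 1.4, 1.5])  # source-sample bias per bin (assumed known)
b_spec = 1.8                                  # spectroscopic-sample bias
A = 0.1                                       # matter clustering amplitude (toy constant)

# Cross-correlation amplitude in bin i: w_i ~ A * b_spec * b_src_i * f_i
w = A * b_spec * b_src * f_true

# Recovery: divide out the biases and renormalize
f_hat = w / (A * b_spec * b_src)
f_hat /= f_hat.sum()
print(f_hat)
```

If `b_src` were mis-specified, the recovered `f_hat` would be distorted in the same bins, mirroring the sensitivity to bias evolution noted above.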
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
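Glimpse's actual reconstruction uses a multi-scale wavelet sparsity prior, but the underlying pattern of solving an ill-posed linear inverse problem with sparse regularization can be sketched with plain ISTA (iterative soft-thresholding) on a toy problem, with identity-domain sparsity standing in for wavelets and invented data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed linear inverse problem: y = A x + noise, with x sparse
n, m = 50, 100
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[[7, 30, 71]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.normal(size=n)

# ISTA: gradient step on the data term, then soft-thresholding (the sparsity prior)
lam = 0.05
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(500):
    g = A.T @ (A @ x - y)
    x = x - g / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

print(np.flatnonzero(np.abs(x) > 0.5))  # recovered support of the sparse signal
```

The soft-thresholding step is what enforces sparsity; in Glimpse the analogous shrinkage is applied to wavelet coefficients of the convergence map rather than to the signal directly.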
Optimal Multiple Surface Segmentation With Shape and Context Priors
Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong
2014-01-01
Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309
Systematic effects on dark energy from 3D weak shear
NASA Astrophysics Data System (ADS)
Kitching, T. D.; Taylor, A. N.; Heavens, A. F.
2008-09-01
We present an investigation into the potential effect of systematics inherent in multiband wide-field surveys on the dark energy equation-of-state determination for two 3D weak lensing methods. The weak lensing methods are a geometric shear-ratio method and 3D cosmic shear. The analysis here uses an extension of the Fisher matrix framework to include jointly photometric redshift systematics, shear distortion systematics and intrinsic alignments. Using analytic parametrizations of these three primary systematic effects allows an isolation of systematic parameters of particular importance. We show that assuming systematic parameters are fixed, but possibly biased, results in potentially large biases in dark energy parameters. We quantify any potential bias by defining a Bias Figure of Merit. By marginalizing over extra systematic parameters, such biases are negated at the expense of an increase in the cosmological parameter errors. We show the effect on the dark energy Figure of Merit of marginalizing over each systematic parameter individually. We also show the overall reduction in the Figure of Merit due to all three types of systematic effects. Based on some assumption of the likely level of systematic errors, we find that the largest effect on the Figure of Merit comes from uncertainty in the photometric redshift systematic parameters. These can reduce the Figure of Merit by up to a factor of 2 to 4 in both 3D weak lensing methods, if no informative prior on the systematic parameters is applied. Shear distortion systematics have a smaller overall effect. Intrinsic alignment effects can reduce the Figure of Merit by up to a further factor of 2. This, however, is a worst-case scenario, within the assumptions of the parametrizations used. By including prior information on systematic parameters, the Figure of Merit can be recovered to a large extent, and combined constraints from 3D cosmic shear and shear ratio are robust to systematics. 
We conclude that, as a rule of thumb, given a realistic current understanding of intrinsic alignments and photometric redshifts, then including all three primary systematic effects reduces the Figure of Merit by at most a factor of 2.
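The Fisher-matrix mechanics behind fixing, marginalizing, and applying priors to systematic parameters, as discussed in the two paragraphs above, can be illustrated with a toy 3-parameter matrix (two dark-energy parameters plus one systematic; all entries are invented):

```python
import numpy as np

# Toy Fisher matrix: two dark-energy parameters (w0, wa) plus one systematic
# (e.g. a photo-z bias); off-diagonal entries encode their degeneracies.
F = np.array([[40.0, -8.0, 12.0],
              [-8.0, 10.0, -6.0],
              [12.0, -6.0,  9.0]])

def fom_from_cov(cov2):
    # Figure of Merit ~ inverse area of the (w0, wa) error ellipse
    return 1.0 / np.sqrt(np.linalg.det(cov2))

# Systematic fixed: drop its row and column before inverting
fom_fixed = fom_from_cov(np.linalg.inv(F[:2, :2]))

# Systematic marginalized: invert the full matrix, keep the (w0, wa) block
fom_marg = fom_from_cov(np.linalg.inv(F)[:2, :2])

# Gaussian prior of width 0.05 on the systematic: add 1/sigma^2 to its diagonal
Fp = F.copy()
Fp[2, 2] += 1.0 / 0.05**2
fom_prior = fom_from_cov(np.linalg.inv(Fp)[:2, :2])

print(fom_fixed, fom_marg, fom_prior)
```

Marginalizing degrades the Figure of Merit relative to fixing the systematic, and an informative prior recovers most of the loss, the same qualitative behaviour the abstract reports for photometric redshift and intrinsic alignment parameters.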
Reznikov, Roman; Diwan, Mustansir; Nobrega, José N; Hamani, Clement
2015-02-01
Most of the available preclinical models of PTSD have focused on isolated behavioural aspects and have not considered individual variations in response to stress. We employed behavioural criteria to identify and characterize a subpopulation of rats that present several features analogous to PTSD-like states after exposure to classical fear conditioning. Outbred Sprague-Dawley rats were segregated into weak- and strong-extinction groups on the basis of behavioural scores during extinction of conditioned fear responses. Animals were subsequently tested for anxiety-like behaviour in the open-field test (OFT), novelty suppressed feeding (NSF) and elevated plus maze (EPM). Baseline plasma corticosterone was measured prior to any behavioural manipulation. In a second experiment, rats underwent OFT, NSF and EPM prior to being subjected to fear conditioning to ascertain whether or not pre-stress levels of anxiety-like behaviours could predict extinction scores. We found that 25% of rats exhibit low extinction rates of conditioned fear, a feature that was associated with increased anxiety-like behaviour across multiple tests in comparison to rats showing strong extinction. In addition, weak-extinction animals showed low levels of corticosterone prior to fear conditioning, a variable that seemed to predict extinction recall scores. In a separate experiment, anxiety measures taken prior to fear conditioning were not predictive of a weak-extinction phenotype, suggesting that weak-extinction animals do not show detectable traits of anxiety in the absence of a stressful experience. These findings suggest that extinction impairment may be used to identify stress-vulnerable rats, thus providing a useful model for elucidating mechanisms and investigating potential treatments for PTSD. Copyright © 2014 Elsevier Ltd. All rights reserved.
Adaptive power priors with empirical Bayes for clinical trials.
Gravestock, Isaac; Held, Leonhard
2017-09-01
Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much discussion as a way to increase the feasibility of trials in situations where patients are difficult to recruit. The best method to include this data is not yet clear, especially in the case when few historical studies are available. This paper looks at the power prior technique afresh in a binomial setting and examines some previously unexamined properties, such as Box P values, bias, and coverage. Additionally, it proposes an empirical Bayes-type approach to estimating the prior weight parameter by marginal likelihood. This estimate has advantages over previously criticised methods in that it varies commensurably with differences in the historical and current data and can choose weights near 1 when the data are similar enough. Fully Bayesian approaches are also considered. An analysis of the operating characteristics shows that the adaptive methods work well and that the various approaches have different strengths and weaknesses. Copyright © 2017 John Wiley & Sons, Ltd.
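A minimal sketch of the empirical Bayes idea in the binomial setting described above: the power parameter δ down-weights the historical likelihood inside a (normalized) beta power prior, and the marginal likelihood of the current data is maximized over δ. This is an illustration under a uniform initial prior with invented counts, not the paper's actual analysis:

```python
import numpy as np
from scipy.special import betaln, comb

def log_marglik(delta, x0, n0, x, n, a0=1.0, b0=1.0):
    # Normalized power prior: theta ~ Beta(a0 + delta*x0, b0 + delta*(n0 - x0));
    # marginal likelihood of the current binomial data x/n given delta
    a = a0 + delta * x0
    b = b0 + delta * (n0 - x0)
    return np.log(comb(n, x)) + betaln(a + x, b + n - x) - betaln(a, b)

deltas = np.linspace(0.0, 1.0, 101)
x0, n0 = 30, 100                 # historical trial: 30% event rate

# Current data similar to historical: marginal likelihood favors delta near 1
d_sim = deltas[np.argmax(log_marglik(deltas, x0, n0, 31, 100))]
# Current data in conflict with historical: delta is driven toward 0
d_dif = deltas[np.argmax(log_marglik(deltas, x0, n0, 60, 100))]
print(d_sim, d_dif)
```

This reproduces the qualitative property highlighted in the abstract: the estimated weight varies commensurably with the agreement between historical and current data, approaching 1 when they are similar enough.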
Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A
2012-07-01
Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
Howell, W.D.
1957-08-20
An apparatus for automatically recording the results of counting operations on trains of electrical pulses is described. The disadvantages of prior devices utilizing the two common methods of obtaining the count rate are overcome by this apparatus; in the case of time-controlled operation, the disclosed system automatically records any information stored by the scaler but not transferred to the printer at the end of the predetermined time-controlled operations and, in the case of count-controlled operation, provision is made to prevent a weak sample from occupying the apparatus for an excessively long period of time.
The cross-correlation between 3D cosmic shear and the integrated Sachs-Wolfe effect
NASA Astrophysics Data System (ADS)
Zieser, Britta; Merkel, Philipp M.
2016-06-01
We present the first calculation of the cross-correlation between 3D cosmic shear and the integrated Sachs-Wolfe (iSW) effect. Both signals are combined in a single formalism, which permits the computation of the full covariance matrix. In order to avoid the uncertainties presented by the non-linear evolution of the matter power spectrum and intrinsic alignments of galaxies, our analysis is restricted to large scales, i.e. multipoles below ℓ = 1000. We demonstrate in a Fisher analysis that this reduction compared to other studies of 3D weak lensing extending to smaller scales is compensated by the information that is gained if the additional iSW signal and in particular its cross-correlation with lensing data are considered. Given the observational standards of upcoming weak-lensing surveys like Euclid, marginal errors on cosmological parameters decrease by 10 per cent compared to a cosmic shear experiment if both types of information are combined without a cosmic microwave background (CMB) prior. Once the constraining power of CMB data is added, the improvement becomes marginal.
Rational hypocrisy: a Bayesian analysis based on informal argumentation and slippery slopes.
Rai, Tage S; Holyoak, Keith J
2014-01-01
Moral hypocrisy is typically viewed as an ethical accusation: Someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. We suggest that in some instances the apparent logical inconsistency stems from different evaluations of a weak argument, rather than dishonesty per se. Extending Corner, Hahn, and Oaksford's (2006) analysis of slippery slope arguments, we develop a Bayesian framework in which accusations of hypocrisy depend on inferences of shared category membership between proposed actions and previous standards, based on prior probabilities that inform the strength of competing hypotheses. Across three experiments, we demonstrate that inferences of hypocrisy increase as perceptions of the likelihood of shared category membership between precedent cases and current cases increase, that these inferences follow established principles of category induction, and that the presence of self-serving motives increases inferences of hypocrisy independent of changes in the actions themselves. Taken together, these results demonstrate that Bayesian analyses of weak arguments may have implications for assessing moral reasoning. © 2014 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Liu, Xiangkun; Li, Baojiu; Zhao, Gong-Bo; Chiu, Mu-Chen; Fang, Wei; Pan, Chuzhong; Wang, Qiao; Du, Wei; Yuan, Shuo; Fu, Liping; Fan, Zuhui
2016-07-01
In this Letter, we report the observational constraints on the Hu-Sawicki f(R) theory derived from weak lensing peak abundances, which are closely related to the mass function of massive halos. In comparison with studies using optical or X-ray clusters of galaxies, weak lensing peak analyses have the advantage of not relying on mass-baryonic observable calibrations. With observations from the Canada-France-Hawaii-Telescope Lensing Survey, our peak analyses give rise to a tight constraint on the model parameter |f_R0| for n = 1. The 95% C.L. is log10 |f_R0| < -4.82 given WMAP9 priors on (Ω_m, A_s). With Planck15 priors, the corresponding result is log10 |f_R0| < -5.16.
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang
2016-04-01
Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which leads to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work adequately exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem is formulated, which can be solved through the block coordinate descent (BCD) method. Additionally, the adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule of the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated using vibration signals from an experimental rig of aircraft engine bearings.
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of the catchments, and are usually more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
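The Kullback-Leibler divergence between a prior and its posterior, used above as a parameter-sensitivity measure, can be computed on a grid. Below is a one-parameter toy in which Gaussian likelihoods stand in for the ecohydrological model; a large KLD indicates the data moved the posterior far from the prior (a sensitive parameter), while a small KLD indicates a prior-dominated, insensitive parameter:

```python
import numpy as np
from scipy import stats

theta = np.linspace(-5, 5, 2001)   # parameter grid
dtheta = theta[1] - theta[0]

def kld(p, q):
    # Discretized KL(p || q) for densities tabulated on the same grid
    m = p > 0                      # skip zero-density points (0 * log 0 = 0)
    return float(np.sum(p[m] * np.log(p[m] / q[m])) * dtheta)

prior = stats.norm.pdf(theta, 0.0, 2.0)      # weakly informative prior

def posterior(lik):
    post = prior * lik
    return post / (post.sum() * dtheta)      # renormalize on the grid

kl_strong = kld(posterior(stats.norm.pdf(theta, 1.0, 0.2)), prior)  # informative data
kl_weak = kld(posterior(stats.norm.pdf(theta, 1.0, 5.0)), prior)    # weak data

print(kl_strong, kl_weak)
```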
Meteorological variables to aid forecasting deep slab avalanches on persistent weak layers
Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.
2015-01-01
Deep slab avalanches are particularly challenging to forecast. These avalanches are difficult to trigger, yet when they release they tend to propagate far and can result in large and destructive avalanches. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl ski area in southwest Montana to test the usefulness of meteorological variables for predicting seasons and days with deep slab avalanches. We defined deep slab avalanches as those that failed on persistent weak layers deeper than 0.9 m, and that occurred after February 1st. Previous studies often used meteorological variables from days prior to avalanches, but we also considered meteorological variables over the early months of the season. We used classification trees and random forests for our analyses. Our results showed seasons with either dry or wet deep slabs on persistent weak layers typically had less precipitation from November through January than seasons without deep slabs on persistent weak layers. Days with deep slab avalanches on persistent weak layers often had warmer minimum 24-hour air temperatures, and more precipitation over the prior seven days, than days without deep slabs on persistent weak layers. Days with deep wet slab avalanches on persistent weak layers were typically preceded by three days of above freezing air temperatures. Seasonal and daily meteorological variables were found useful to aid forecasting dry and wet deep slab avalanches on persistent weak layers, and should be used in combination with continuous observation of the snowpack and avalanche activity.
Relation Extraction with Weak Supervision and Distributional Semantics
2013-05-01
Report front matter and table of contents (dates covered: 2013; chapters include Introduction and Prior Work, covering supervised relation extraction and distant supervision for relation extraction).
New probe of magnetic fields in the prereionization epoch. I. Formalism
NASA Astrophysics Data System (ADS)
Venumadhav, Tejaswi; Oklopčić, Antonija; Gluscevic, Vera; Mishra, Abhilash; Hirata, Christopher M.
2017-04-01
We propose a method of measuring extremely weak magnetic fields in the intergalactic medium prior to and during the epoch of cosmic reionization. The method utilizes the Larmor precession of spin-polarized neutral hydrogen in the triplet state of the hyperfine transition. This precession leads to a systematic change in the brightness temperature fluctuations of the 21-cm line from the high-redshift universe, and thus the statistics of these fluctuations encode information about the magnetic field the atoms are immersed in. The method is most suited to probing fields that are coherent on large scales; in this paper, we consider a homogeneous magnetic field over the scale of the 21-cm fluctuations. Due to the long lifetime of the triplet state of the 21-cm transition, this technique is naturally sensitive to extremely weak field strengths, of order 10^-19 G at a reference redshift of ~20 (or 10^-21 G if scaled to the present day). Therefore, this might open up the possibility of probing primordial magnetic fields just prior to reionization. If the magnetic fields are much stronger, it is still possible to use this method to infer their direction, and place a lower limit on their strength. In this paper (Paper I in a series on this effect), we perform detailed calculations of the microphysics behind this effect, and take into account all the processes that affect the hyperfine transition, including radiative decays, collisions, and optical pumping by Lyman-α photons. We conclude with an analytic formula for the brightness temperature of linear-regime fluctuations in the presence of a magnetic field, and discuss its limiting behavior for weak and strong fields.
Meteorological variables associated with deep slab avalanches on persistent weak layers
Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.
2014-01-01
Deep slab avalanches are a particularly challenging avalanche forecasting problem. These avalanches are typically difficult to trigger, yet when they are triggered they tend to propagate far and result in large and destructive avalanches. For this work we define deep slab avalanches as those that fail on persistent weak layers deeper than 0.9m (3 feet), and that occur after February 1st. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl Ski Area to test the usefulness of meteorological variables for predicting deep slab avalanches. As in previous studies, we used data from the days preceding deep slab cycles, but we also considered meteorological metrics over the early months of the season. We utilized classification trees for our analyses. Our results showed warmer temperatures in the prior twenty-four hours and more loading over the seven days before days with deep slab avalanches on persistent weak layers. In line with previous research, extended periods of above freezing temperatures led to days with deep wet slab avalanches on persistent weak layers. Seasons with either dry or wet avalanches on deep persistent weak layers typically had drier early months, and often had some significant snow depth prior to those dry months. This paper provides insights for ski patrollers, guides, and avalanche forecasters who struggle to forecast deep slab avalanches on persistent weak layers late in the season.
Jiang, Ximiao; Huang, Baoshan; Yan, Xuedong; Zaretzki, Russell L; Richards, Stephen
2013-01-01
The severity of traffic-related injuries has been studied by many researchers in recent decades. However, the evaluation of many factors is still in dispute and, until this point, few studies have taken into account pavement management factors as points of interest. The objective of this article is to evaluate the combined influences of pavement management factors and traditional traffic engineering factors on the injury severity of 2-vehicle crashes. This study examines 2-vehicle rear-end, sideswipe, and angle collisions that occurred on Tennessee state routes from 2004 to 2008. Both the traditional ordered probit (OP) model and the Bayesian ordered probit (BOP) model with a weakly informative prior were fitted for each collision type. The performances of these models were evaluated based on the parameter estimates and deviances. The results indicated that pavement management factors played identical roles in all 3 collision types. Pavement serviceability produces significant positive effects on the severity of injuries. The pavement distress index (PDI), rutting depth (RD), and rutting depth difference between right and left wheels (RD_df) were not significant in any of these 3 collision types. The effects of traffic engineering factors varied across collision types, except that a few were consistently significant in all 3 collision types, such as annual average daily traffic (AADT), rural-urban location, speed limit, peak hour, and light condition. The findings of this study indicated that improved pavement quality does not necessarily lessen the severity of injuries when a 2-vehicle crash occurs. The effects of traffic engineering factors are not universal but vary by the type of crash. The study also found that the BOP model with a weakly informative prior can be used as an alternative but was not superior to the traditional OP model in terms of overall performance.
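For reference, the ordered probit model underlying both the OP and BOP analyses can be sketched and fit by maximum likelihood on simulated data. This is a toy with one invented covariate and three severity levels, not the study's actual specification; the comment notes where a weakly informative prior would enter the BOP variant:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)                # toy crash-level covariate
beta_true = 0.8
cuts_true = np.array([-0.5, 0.7])     # thresholds for 3 ordered severity levels

# Latent severity y* = x*beta + eps; the observed category is where y* falls
ystar = beta_true * x + rng.normal(size=n)
y = np.digitize(ystar, cuts_true)     # categories 0, 1, 2

def negll(params):
    beta, c1, log_gap = params
    c2 = c1 + np.exp(log_gap)         # enforce c1 < c2
    eta = beta * x
    p0 = stats.norm.cdf(c1 - eta)
    p1 = stats.norm.cdf(c2 - eta) - p0
    p2 = 1.0 - stats.norm.cdf(c2 - eta)
    probs = np.stack([p0, p1, p2], axis=1)[np.arange(n), y]
    # A Bayesian (BOP) variant with a weakly informative N(0, 10^2) prior on
    # beta would simply add 0.5 * (beta / 10)**2 to this objective.
    return -np.sum(np.log(np.clip(probs, 1e-12, None)))

res = optimize.minimize(negll, [0.0, -1.0, 0.0], method="Nelder-Mead")
print(res.x[0])   # estimate of beta
```

With this much data the weak prior's penalty is negligible, which is consistent with the study's finding that the BOP model was not superior to the plain OP model.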
Layer detection and snowpack stratigraphy characterisation from digital penetrometer signals
NASA Astrophysics Data System (ADS)
Floyer, James Antony
Forecasting for slab avalanches benefits from precise measurements of snow stratigraphy. Snow penetrometers offer the possibility of providing detailed information about snowpack structure; however, their use has yet to be adopted by avalanche forecasting operations in Canada. A manually driven, variable-rate force-resistance penetrometer is tested for its ability to measure snowpack information suitable for avalanche forecasting and for spatial variability studies on snowpack properties. Subsequent to modifications, weak layers as thin as 5 mm are reliably detected from the penetrometer signals. Rate effects are investigated and found to be insignificant for push velocities between 0.5 and 100 cm s^-1 for dry snow. An analysis of snow deformation below the penetrometer tip is presented using particle image velocimetry, and two zones associated with particle deflection are identified. The compacted zone is a region of densified snow that is pushed ahead of the penetrometer tip; the deformation zone is a broader zone surrounding the compacted zone, where deformation is in compression and in shear. Initial formation of the compacted zone is responsible for pronounced force spikes in the penetrometer signal. An algorithm for tracing weak layers, crusts and interfaces across transects or grids of penetrometer profiles is presented. This algorithm uses Wiener spiking deconvolution to trace a portion of the signal, manually identified as a layer in one profile, across to an adjacent profile. Layer tracing is found to be most effective for tracing crusts and prominent weak layers, although weak layers close to crusts were not well traced. A framework for extending this method to detect weak layers with no prior knowledge of weak layer existence is also presented. A study relating the fracture character of layers identified in compression tests is presented.
A multivariate model is presented that distinguishes between sudden and other fracture characters 80% of the time. Transects of penetrometer profiles are presented over several alpine terrain features commonly associated with spatial variability of snowpack properties. Physical processes relating to the variability of certain snowpack properties revealed in the transects are discussed. The importance of characteristic signatures for training avalanche practitioners to recognise potentially unstable terrain is also discussed.
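The Wiener spiking deconvolution step of the layer-tracing algorithm can be illustrated on synthetic signals: a filter built from the layer signature identified in one profile collapses the same signature in an adjacent profile to a spike at the layer's position. The signature, shift, and noise level below are invented:

```python
import numpy as np

n = 256
# "Layer signature" manually identified in one penetrometer profile (toy wavelet)
sig = np.array([0.2, 0.8, 1.0, -0.6, -0.3])
t = np.zeros(n)
t[:5] = sig

# Adjacent profile: the same signature shifted to sample 120, plus noise
rng = np.random.default_rng(3)
s = np.zeros(n)
s[120:125] = sig
s += 0.02 * rng.normal(size=n)

# Wiener spiking deconvolution: a filter that collapses the signature to a spike,
# regularized against noise by the prewhitening constant eps
T = np.fft.rfft(t)
eps = 0.01
F = np.conj(T) / (np.abs(T) ** 2 + eps)
spike = np.fft.irfft(F * np.fft.rfft(s), n)

print(np.argmax(np.abs(spike)))   # estimated layer position in the adjacent profile
```

The peak of the deconvolved trace marks where the layer sits in the adjacent profile, which is how the algorithm carries a layer pick across a transect.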
Enhancing QKD security with weak measurements
NASA Astrophysics Data System (ADS)
Farinholt, Jacob M.; Troupe, James E.
2016-10-01
Publisher's Note: This paper, originally published on 10/24/2016, was replaced with a corrected/revised version on 11/8/2016. In the late 1980s, Aharonov and colleagues developed the notion of a weak measurement of a quantum observable that does not appreciably disturb the system.1, 2 The measurement results are conditioned on both the pre-selected and post-selected state of the quantum system. While any one measurement reveals very little information, by making the same measurement on a large ensemble of identically prepared pre- and post-selected (PPS) states and averaging the results, one may obtain what is known as the weak value of the observable with respect to that PPS ensemble. Recently, weak measurements have been proposed as a method of assessing the security of QKD in the well-known BB84 protocol.3 This weak value augmented QKD protocol (WV-QKD) works by additionally requiring the receiver, Bob, to make a weak measurement of a particular observable prior to his strong measurement. For the subset of measurement results in which Alice and Bob's measurement bases do not agree, the weak measurement results can be used to detect any attempt by an eavesdropper, Eve, to correlate her measurement results with Bob's. Furthermore, the well-known detector blinding attacks, which are known to perfectly correlate Eve's results with Bob's without being caught by conventional BB84 implementations, actually make the eavesdropper more visible in the new WV-QKD protocol. In this paper, we will introduce the WV-QKD protocol and discuss its generalization to the 6-state single qubit protocol. We will discuss the types of weak measurements that are optimal for this protocol, and compare the predicted performance of the 6- and 4-state WV-QKD protocols.
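The weak value central to the WV-QKD discussion is straightforward to compute for a qubit: A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ for pre-selected |ψ⟩ and post-selected |φ⟩, and it can lie far outside the observable's eigenvalue range when the two states are nearly orthogonal. The states below are illustrative, not the protocol's actual measurement settings:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z, eigenvalues +/- 1

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)       # pre-selection |+x>
alpha = -np.pi / 4 + 0.05                                # nearly orthogonal post-selection
phi = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

# Weak value A_w = <phi|A|psi> / <phi|psi>
weak_value = (phi.conj() @ sz @ psi) / (phi.conj() @ psi)
print(weak_value.real)
```

The anomalously large value (here roughly 20 for an observable bounded by ±1) is the amplification effect that makes averaged weak measurements informative despite each one being gentle.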
Pereira, L C; Kerr, J; Jolles, B M
2016-08-01
Using a systematic review, we investigated whether there is an increased risk of post-operative infection in patients who have received an intra-articular corticosteroid injection to the hip for osteoarthritis prior to total hip arthroplasty (THA). Studies dealing with an intra-articular corticosteroid injection to the hip and infection following subsequent THA were identified from databases for the period between 1990 and 2013. Retrieved articles were independently assessed for their methodological quality. A total of nine studies met the inclusion criteria. Two recommended against a steroid injection prior to THA and seven found no risk with an injection. No prospective controlled trials were identified. Most studies were retrospective. Lack of information about the methodology was a consistent flaw. The literature in this area is scarce and the evidence is weak. Most studies were retrospective, and confounding factors were poorly defined or not addressed. There is thus currently insufficient evidence to conclude that an intra-articular corticosteroid injection administered prior to THA increases the rate of infection. High quality, multicentre randomised trials are needed to address this issue. Cite this article: Bone Joint J 2016;98-B:1027-35. ©2016 The British Editorial Society of Bone & Joint Surgery.
State-dependent rotations of spins by weak measurements
NASA Astrophysics Data System (ADS)
Miller, D. J.
2011-03-01
It is shown that a weak measurement of a quantum system produces a new state of the quantum system which depends on the prior state, as well as the (uncontrollable) measured position of the pointer variable of the weak-measurement apparatus. The result imposes a constraint on hidden-variable theories which assign a different state to a quantum system than standard quantum mechanics. The constraint means that a crypto-nonlocal hidden-variable theory can be ruled out in a more direct way than previously done.
Negative Transfer and Positive Interference: Some Confusion in Introductory Psychology Textbooks.
ERIC Educational Resources Information Center
Reid, Edward
1981-01-01
Discusses weaknesses in 11 introductory psychology textbooks in distinguishing between the terms proactive interference and negative transfer. Negative transfer relates to a detrimental effect of prior experience on the learning of a new task, whereas proactive interference concerns a detrimental effect of prior learning on the recall of a second…
Nielson, Kristy A; Correro, Anthony N
2017-10-01
The Deese-Roediger-McDermott (DRM) paradigm examines false memory by introducing words associated with a non-presented 'critical lure' as memoranda, which typically causes the lures to be remembered as frequently as studied words. Our prior work has shown enhanced veridical memory and reduced misinformation effects when arousal is induced after learning (i.e., during memory consolidation). These effects have not been examined in the DRM task, or with signal detection analysis, which can elucidate the mechanisms underlying memory alterations. Thus, 130 subjects studied and then immediately recalled six DRM lists, one after another, and then watched a 3-min arousing (n=61) or neutral (n=69) video. Recognition tested 70 min later showed that arousal induced after learning led to better delayed discrimination of studied words from (a) critical lures, and (b) other non-presented 'weak associates.' Furthermore, arousal reduced liberal response bias (i.e., the tendency toward accepting dubious information) for studied words relative to all foils, including critical lures and 'weak associates.' Thus, arousal induced after learning effectively increased the distinction between signal and noise by enhancing access to verbatim information and reducing endorsement of dubious information. These findings provide important insights into the cognitive mechanisms by which arousal modulates early memory consolidation processes. Copyright © 2017 Elsevier Inc. All rights reserved.
A Bayesian model averaging approach with non-informative priors for cost-effectiveness analyses.
Conigliani, Caterina
2010-07-20
We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging (BMA) in the particular case of weak prior information about the unknown parameters of the different models involved in the procedure. The main consequence of this assumption is that the marginal densities required by BMA are undetermined. However, in accordance with the theory of partial Bayes factors and in particular of fractional Bayes factors, we suggest replacing each marginal density with a ratio of integrals that can be efficiently computed via path sampling. Copyright (c) 2010 John Wiley & Sons, Ltd.
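The BMA step described above can be sketched numerically: posterior model probabilities are proportional to marginal likelihood times prior model probability, and quantities of interest are averaged with those weights. All numbers below are hypothetical, and the log marginal likelihoods are taken as given rather than computed via fractional Bayes factors and path sampling.

```python
import math

# Hypothetical log marginal likelihoods of three candidate cost models
# (log-normal, gamma, normal) fitted to the same trial cost data.
log_ml = {"lognormal": -412.3, "gamma": -415.1, "normal": -431.8}
prior = {m: 1 / 3 for m in log_ml}  # equal prior model probabilities

# Posterior model probabilities: p(M|y) ∝ p(y|M) p(M), computed stably
# by subtracting the maximum log marginal likelihood before exponentiating.
mx = max(log_ml.values())
unnorm = {m: math.exp(log_ml[m] - mx) * prior[m] for m in log_ml}
z = sum(unnorm.values())
post = {m: w / z for m, w in unnorm.items()}

# Model-averaged estimate: weight each model's mean-cost estimate by p(M|y).
mean_cost = {"lognormal": 1520.0, "gamma": 1490.0, "normal": 1400.0}  # hypothetical
bma_mean = sum(post[m] * mean_cost[m] for m in post)
print(post, round(bma_mean, 1))
```

The averaged estimate is dominated by the best-supported model but retains a contribution from plausible alternatives, which is the point of integrating model uncertainty into the analysis.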
Daube, Jasper R; Sorenson, Eric J; Windebank, Anthony J
2009-01-01
Poliomyelitis is a monophasic illness affecting lower motor neurons, and individuals may describe new problems years after the initial weakness. We have studied 38 people with the post-polio syndrome over a 15-year period, assessing a number of neuromuscular measures, including motor unit number estimation (MUNE). Twenty-five individuals reported progressive weakness, but there was no objective change in MUNE and other measures. There was an association between reported weakness and initial deficits. There was a slow decline in MUNE values over time in both groups.
Deformable segmentation via sparse representation and dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N
2012-10-01
"Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or misleading, shape priors play a more important role in guiding a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics are not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly proposed modules not only significantly reduce the computational complexity but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
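The sparse representation underlying SSC can be illustrated with a minimal greedy sparse-coding routine. This is a generic orthogonal matching pursuit sketch over a random dictionary, not the paper's K-SVD-trained shape dictionary, and all sizes are hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y as a sparse
    combination of at most k columns (atoms) of dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 20))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 11]  # signal built from two atoms
x = omp(D, y, k=2)
print(np.nonzero(x)[0])             # indices of the selected atoms
```

A learned (e.g. K-SVD) dictionary replaces the random one here, which is what makes the representation both compact and informative in the paper's setting.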
NASA Astrophysics Data System (ADS)
Ding, Zhi-yong; He, Juan; Ye, Liu
2017-02-01
A feasible scheme for protecting the Greenberger-Horne-Zeilinger (GHZ) entanglement state in non-Markovian environments is proposed. It consists of a prior weak measurement on each qubit before the interaction with the decoherence environment, followed by quantum measurement reversals afterwards. It is shown that both the fidelity and concurrence of the GHZ state can be effectively improved. Meanwhile, we also verify that our scenario can enhance tripartite nonlocality remarkably. In addition, the results indicate that the larger the weak-measurement strength, the more effective the scheme, albeit with a lower success probability.
Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.
Hampson, R E; Deadwyler, S A
1996-11-26
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.
On vital aid: the why, what and how of validation
Kleywegt, Gerard J.
2009-01-01
Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968
Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J
2015-01-01
Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
The surface latent heat flux anomalies related to major earthquake
NASA Astrophysics Data System (ADS)
Jing, Feng; Shen, Xuhui; Kang, Chunli; Xiong, Pan; Hong, Shunying
2011-12-01
SLHF (Surface Latent Heat Flux) is an atmospheric parameter that describes the heat released by phase changes and depends on meteorological parameters such as surface temperature, relative humidity, and wind speed. There is a sharp difference in SLHF between the ocean surface and the land surface. Recently, many studies of SLHF anomalies prior to earthquakes have been carried out. It has been shown that enhanced energy exchange between the coastal surface and the atmosphere prior to earthquakes can increase the rate of water-heat exchange, which leads to an obvious increase in SLHF. In this paper, two earthquakes in 2010 (the Haiti earthquake and the earthquake southwest of Sumatra, Indonesia) are analyzed using SLHF data and an STD (standard deviation) threshold method. It is shown that SLHF anomalies may occur for interplate or intraplate earthquakes and for coastal or island earthquakes. The SLHF anomalies usually appear 5-6 days prior to an earthquake and then disappear quickly after the event. The process of anomaly evolution to a certain extent reflects a dynamic energy change process during earthquake preparation, that is, weak-strong-weak-disappeared.
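A minimal sketch of an STD threshold test, assuming the anomaly criterion is a daily value exceeding the background mean by k standard deviations; the data and k=2 are hypothetical, and the paper's exact reference period and windowing are not reproduced here.

```python
# Minimal STD-threshold anomaly test on a hypothetical daily SLHF series.
import statistics

slhf = [110, 95, 102, 99, 104, 98, 101, 100, 97, 180, 175, 103, 99]  # W/m^2, daily
mean = statistics.mean(slhf)
std = statistics.pstdev(slhf)   # population standard deviation of the series
k = 2.0                         # threshold in standard deviations (an assumption)

# Flag days whose SLHF exceeds mean + k*std.
anomalous_days = [i for i, v in enumerate(slhf) if v > mean + k * std]
print(anomalous_days)
```

In an application, the flagged days would then be compared against the 5-6 day pre-event window described above.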
BAG3 myofibrillar myopathy presenting with cardiomyopathy.
Konersman, Chamindra G; Bordini, Brett J; Scharer, Gunter; Lawlor, Michael W; Zangwill, Steven; Southern, James F; Amos, Louella; Geddes, Gabrielle C; Kliegman, Robert; Collins, Michael P
2015-05-01
Myofibrillar myopathies (MFMs) are a heterogeneous group of neuromuscular disorders distinguished by the pathological hallmark of myofibrillar dissolution. Most patients present in adulthood, but mutations in several genes including BCL2-associated athanogene 3 (BAG3) cause predominantly childhood-onset disease. BAG3-related MFM is particularly severe, featuring weakness, cardiomyopathy, neuropathy, and early lethality. While prior cases reported either neuromuscular weakness or concurrent weakness and cardiomyopathy at onset, we describe the first case in which cardiomyopathy and cardiac transplantation (age 8) preceded neuromuscular weakness by several years (age 12). The phenotype comprised distal weakness and severe sensorimotor neuropathy. Nerve biopsy was primarily axonal with secondary demyelinating/remyelinating changes without "giant axons." Muscle biopsy showed extensive neuropathic changes that made myopathic changes difficult to interpret. Similar to previous cases, a p.Pro209Leu mutation in exon 3 of BAG3 was found. This case underlines the importance of evaluating for MFMs in patients with combined neuromuscular weakness and cardiomyopathy. Copyright © 2015 Elsevier B.V. All rights reserved.
Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Petty, Grant W.
2005-01-01
This paper presents a new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of the theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
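The core of such a retrieval, posterior ∝ likelihood × prior evaluated on a discretized rain-rate grid, can be sketched as follows. The forward model, the exponential prior, and all numbers are hypothetical stand-ins for the algorithm's radiative-transfer-based ingredients; the saturating toy forward model also hints at why the observational constraint weakens at high rain rates.

```python
import math

# Discretized rain rates (mm/h) and a hypothetical exponential prior.
rates = [0.1 * i for i in range(1, 301)]
prior = [math.exp(-r / 2.0) for r in rates]

def tb(r):
    # Toy forward model: brightness temperature saturating at high rain rate.
    return 180.0 + 60.0 * (1.0 - math.exp(-r / 5.0))

# Hypothetical Gaussian likelihood linking an observed brightness
# temperature to rain rate through the toy forward model.
obs, sigma = 215.0, 3.0
like = [math.exp(-0.5 * ((obs - tb(r)) / sigma) ** 2) for r in rates]

# Posterior ∝ likelihood × prior, normalized over the grid: a full
# distribution, not just a single retrieved value.
unnorm = [l * p for l, p in zip(like, prior)]
z = sum(unnorm)
post = [u / z for u in unnorm]

post_mean = sum(r * p for r, p in zip(rates, post))
print(round(post_mean, 2))
```

Any estimator (posterior mean, mode, quantiles) can then be read off the same normalized grid, which is the practical advantage of retrieving the complete posterior.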
Neutrino masses and their ordering: global data, priors and models
NASA Astrophysics Data System (ADS)
Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.
2018-03-01
We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering (NO) or Inverted Ordering (IO), with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parametrization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑ mν can change dramatically when moving from one prior to the other.
These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the best parametrization and priors, without combining results that involve different assumptions.
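The sensitivity of an upper limit to the prior can be illustrated with a toy one-parameter example: when the likelihood only bounds a parameter from above, a flat prior on the parameter and a flat prior on its logarithm give very different 95% limits. All numbers here are hypothetical and unrelated to the actual neutrino analysis.

```python
import math

def like(m):
    # Toy likelihood for a mass-like parameter m: data only set an upper
    # scale, modeled as a one-sided Gaussian (hypothetical width 0.2).
    return math.exp(-0.5 * (m / 0.2) ** 2)

grid = [1e-4 + i * 1e-4 for i in range(10000)]  # 0.0001 .. 1.0

def upper95(prior):
    # 95% credible upper limit from the normalized posterior on the grid.
    w = [like(m) * prior(m) for m in grid]
    z = sum(w)
    acc = 0.0
    for m, wi in zip(grid, w):
        acc += wi / z
        if acc >= 0.95:
            return m

linear = upper95(lambda m: 1.0)           # flat prior on m
logarithmic = upper95(lambda m: 1.0 / m)  # flat prior on log m
print(round(linear, 3), round(logarithmic, 3))
```

The logarithmic prior piles posterior mass toward small values, so the reported upper limit tightens substantially even though the likelihood is unchanged, mirroring the prior dependence of the ∑ mν limits described above.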
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
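The idea of folding prior parameter estimates into a regression can be sketched as an augmented least-squares problem in which the prior estimates enter as weighted pseudo-observations. This is a generic ridge-toward-a-prior-mean sketch with hypothetical numbers, not Cooley's exact formulation or his auxiliary-parameter optimization.

```python
import numpy as np

# Hypothetical groundwater-style linear model y = X b + noise, with prior
# estimates b_prior of unknown reliability treated as pseudo-observations.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2))
b_true = np.array([2.0, -1.0])
y = X @ b_true + 0.5 * rng.standard_normal(30)

b_prior = np.array([1.8, -0.8])  # best available prior estimates
w = 4.0                          # prior weight (plays the role of an auxiliary parameter)

# Augmented system: stack the data rows with weighted identity rows that
# pull the estimate toward b_prior (ridge regression toward a nonzero mean).
Xa = np.vstack([X, np.sqrt(w) * np.eye(2)])
ya = np.concatenate([y, np.sqrt(w) * b_prior])
b_hat, *_ = np.linalg.lstsq(Xa, ya, rcond=None)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b_ols.round(3), b_hat.round(3))
```

Shrinking toward the prior estimates rather than toward zero is what keeps the bias small relative to standard ridge regression when the prior estimates are reasonable.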
Weak characteristic information extraction from early fault of wind turbine generator gearbox
NASA Astrophysics Data System (ADS)
Xu, Xiaoli; Liu, Xiuli
2017-09-01
Given the weak degradation characteristic information available during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of an early fault, and facilitate early fault warning and dynamic predictive maintenance.
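The denoising-order step can be sketched as follows: embed the signal in a Hankel matrix, keep the leading singular values until their cumulative contribution rate (taken here as squared singular values over their total, one common convention) reaches a threshold, and reconstruct by anti-diagonal averaging. The signal, window, and threshold are hypothetical, and the LMD and μ-SVD stages of the paper's method are omitted.

```python
import numpy as np

def svd_denoise(x, window=50, energy=0.80):
    """Embed x in a Hankel matrix, keep the leading singular values whose
    cumulative contribution rate reaches `energy`, and reconstruct the
    signal by averaging anti-diagonals."""
    n = len(x)
    H = np.array([x[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # Denoising order k from the cumulative contribution rate of s^2.
    ratios = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(ratios, energy)) + 1
    Hd = (U[:, :k] * s[:k]) @ Vt[:k]
    # Average anti-diagonals back into a 1-D signal.
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(Hd.shape[0]):
        out[i:i + window] += Hd[i]
        cnt[i:i + window] += 1
    return out / cnt

t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 10 * t)  # weak periodic component of interest
noisy = clean + 0.5 * np.random.default_rng(2).standard_normal(400)
den = svd_denoise(noisy)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Truncating at the cumulative-contribution threshold keeps the subspace dominated by the periodic component while discarding most of the noise energy, which is the sense in which the denoising order controls the trade-off the abstract describes.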
Self-consistency tests of large-scale dynamics parameterizations for single-column modeling
Edman, Jacob P.; Romps, David M.
2015-03-18
Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
Ciprofloxacin and statin interaction: a cautionary tale of rhabdomyolysis.
Goldie, Fraser Charles; Brogan, Amy; Boyle, James Graham
2016-07-28
A 62-year-old woman presented to hospital, on general practitioner (GP) advice, with a 15-day history of slowly progressing muscle weakness. Results showed newly deranged liver function and creatine kinase (CK) of >24 000. Her medical history included previous myocardial infarction and recurrent urinary tract infection. Four days prior to symptom onset, the patient developed typical urinary tract infection symptoms, treated with ciprofloxacin. The patient had been taking simvastatin (40 mg nocte) for 13 years and had never previously taken ciprofloxacin. Initial management included intravenous crystalloid fluids and discontinuation of simvastatin. The CK level fell, liver function slowly improved and renal function remained stable. Muscle weakness improved and the patient became independently able to perform activities of daily living. While the interactions between statins and other antibiotics are well documented, the interaction between statins and ciprofloxacin is less so. This interaction can have potentially serious consequences. 2016 BMJ Publishing Group Ltd.
Aue, Tatjana; Chauvigné, Léa A S; Bristle, Mirko; Okon-Singer, Hadas; Guex, Raphaël
2016-12-01
Can prior expectancies shape attention to threat? To answer this question, we manipulated the expectancies of spider phobics and nonfearful controls regarding the appearance of spider and bird targets in a visual search task. We observed robust evidence for expectancy influences on attention to birds, reflected in error rates, reaction times, pupil diameter, and heart rate (HR). We found no solid effect, however, of the same expectancies on attention to spiders; only HR revealed a weak and transient impact of prior expectancies on the orientation of attention to threat. Moreover, these asymmetric effects for spiders versus birds were observed in both phobics and controls. Our results are thus consistent with the notion of a threat detection mechanism that is only partially permeable to current expectancies, thereby increasing chances of survival in situations that are mistakenly perceived as safe. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chiu, I.-Non; Umetsu, Keiichi; Sereno, Mauro; Ettori, Stefano; Meneghetti, Massimo; Merten, Julian; Sayers, Jack; Zitrin, Adi
2018-06-01
We perform a three-dimensional triaxial analysis of 16 X-ray regular and 4 high-magnification galaxy clusters selected from the CLASH survey by combining two-dimensional weak-lensing and central strong-lensing constraints. In a Bayesian framework, we constrain the intrinsic structure and geometry of each individual cluster assuming a triaxial Navarro-Frenk-White halo with arbitrary orientations, characterized by the mass M_200c, halo concentration c_200c, and triaxial axis ratios (q_a ≤ q_b), and investigate scaling relations between these halo structural parameters. From triaxial modeling of the X-ray-selected subsample, we find that the halo concentration decreases with increasing cluster mass, with a mean concentration of c_200c = 4.82 ± 0.30 at the pivot mass M_200c = 10^15 M_⊙ h^-1. This is consistent with the result from spherical modeling, c_200c = 4.51 ± 0.14. Independently of the priors, the minor-to-major axis ratio q_a of our full sample exhibits a clear deviation from the spherical configuration (q_a = 0.52 ± 0.04 at 10^15 M_⊙ h^-1 with uniform priors), with a weak dependence on the cluster mass. Combining all 20 clusters, we obtain a joint ensemble constraint on the minor-to-major axis ratio of q_a = 0.652 (+0.162, -0.078) and a lower bound on the intermediate-to-major axis ratio of q_b > 0.63 at the 2σ level from an analysis with uniform priors. Assuming priors on the axis ratios derived from numerical simulations, we constrain the degree of triaxiality for the full sample to be T = 0.79 ± 0.03 at 10^15 M_⊙ h^-1, indicating a preference for a prolate geometry of cluster halos. We find no statistical evidence for an orientation bias (f_geo = 0.93 ± 0.07), which is insensitive to the priors and in agreement with the theoretical expectation for the CLASH clusters.
Investigating the impact of spatial priors on the performance of model-based IVUS elastography
Richards, M S; Doyley, M M
2012-01-01
This paper describes methods that provide prerequisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue, information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore, additional information is needed to produce useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded those computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
Minimally Informative Prior Distributions for PSA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Robert W. Youngblood; Kurt G. Vedros
2010-06-01
A salient feature of Bayesian inference is its ability to incorporate information from a variety of sources into the inference model, via the prior distribution (hereafter simply “the prior”). However, over-reliance on old information can lead to priors that dominate new data. Some analysts seek to avoid this by trying to work with a minimally informative prior distribution. Another reason for choosing a minimally informative prior is to avoid the often-voiced criticism of subjectivity in the choice of prior. Minimally informative priors fall into two broad classes: 1) so-called noninformative priors, which attempt to be completely objective, in that the posterior distribution is determined as completely as possible by the observed data, the most well known example in this class being the Jeffreys prior, and 2) priors that are diffuse over the region where the likelihood function is nonnegligible, but that incorporate some information about the parameters being estimated, such as a mean value. In this paper, we compare four approaches in the second class, with respect to their practical implications for Bayesian inference in Probabilistic Safety Assessment (PSA). The most commonly used such prior, the so-called constrained noninformative prior, is a special case of the maximum entropy prior. This is formulated as a conjugate distribution for the most commonly encountered aleatory models in PSA, and is correspondingly mathematically convenient; however, it has a relatively light tail and this can cause the posterior mean to be overly influenced by the prior in updates with sparse data. A more informative prior that is capable, in principle, of dealing more effectively with sparse data is a mixture of conjugate priors. A particular diffuse nonconjugate prior, the logistic-normal, is shown to behave similarly for some purposes. Finally, we review the so-called robust prior. Rather than relying on the mathematical abstraction of entropy, as does the constrained noninformative prior, the robust prior places a heavy-tailed Cauchy prior on the canonical parameter of the aleatory model.
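The light-tail behavior of a single conjugate prior versus a mixture of conjugate priors can be illustrated with the Poisson aleatory model common in PSA, where a gamma(shape a, rate b) prior updates to gamma(a + x, b + t) after x events in exposure t, and mixture weights update in proportion to each component's marginal likelihood. A minimal sketch; the shape a = 0.5, the prior mean of 0.001/h, and the data (2 events in 100 h) are invented for illustration, not values from the paper:

```python
from math import exp, lgamma, log

# Poisson aleatory model: x events observed in exposure t.
# Conjugate gamma(shape a, rate b) prior -> gamma(a + x, b + t) posterior.

def posterior_mean(a, b, x, t):
    return (a + x) / (b + t)

def log_marginal(a, b, x, t):
    """Log marginal probability of the data under a gamma(a, b) prior."""
    return (a * log(b) - lgamma(a) + lgamma(a + x)
            - (a + x) * log(b + t) + x * log(t) - lgamma(x + 1))

def mixture_posterior_mean(components, x, t):
    """components: list of (weight, a, b); weights are updated by marginal likelihood."""
    logws = [log(w) + log_marginal(a, b, x, t) for w, a, b in components]
    m = max(logws)
    ws = [exp(l - m) for l in logws]
    total = sum(ws)
    return sum(w / total * posterior_mean(a, b, x, t)
               for w, (_, a, b) in zip(ws, components))

# Sparse data in conflict with the prior: 2 events in 100 h (MLE 0.02/h),
# while the informative prior is centred on 0.001/h.
x, t = 2, 100.0
single = posterior_mean(0.5, 500.0, x, t)              # light-tailed conjugate prior
mix = mixture_posterior_mean([(0.9, 0.5, 500.0),       # informative component
                              (0.1, 0.5, 5.0)], x, t)  # diffuse component
print(single, mix)  # the mixture posterior tracks the data far more closely
```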
Das, Samarjit; Amoedo, Breogan; De la Torre, Fernando; Hodgins, Jessica
2012-01-01
In this paper, we propose to use a weakly supervised machine learning framework for automatic detection of Parkinson's Disease motor symptoms in daily living environments. Our primary goal is to develop a monitoring system capable of being used outside of controlled laboratory settings. Such a system would enable us to track medication cycles at home and provide valuable clinical feedback. Most of the relevant prior works involve supervised learning frameworks (e.g., Support Vector Machines). However, in-home monitoring provides only coarse ground truth information about symptom occurrences, making it very hard to adapt and train supervised learning classifiers for symptom detection. We address this challenge by formulating symptom detection under incomplete ground truth information as a multiple instance learning (MIL) problem. MIL is a weakly supervised learning framework that does not require exact instances of symptom occurrences for training; rather, it learns from approximate time intervals within which a symptom might or might not have occurred on a given day. Once trained, the MIL detector was able to spot symptom-prone time windows on other days and approximately localize the symptom instances. We monitored two Parkinson's disease (PD) patients, each for four days with a set of five triaxial accelerometers and utilized a MIL algorithm based on axis parallel rectangle (APR) fitting in the feature space. We were able to detect subject specific symptoms (e.g. dyskinesia) that conformed with a daily log maintained by the patients.
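A much-simplified sketch of the axis-parallel rectangle (APR) idea behind the MIL detector above: choose one "witness" instance per positive bag (here, the instance nearest the centroid of all positive-bag instances) and take the bounding box of the witnesses. The full APR algorithm also grows and shrinks the box against negative bags; the toy accelerometer-feature vectors below are invented:

```python
def fit_apr(pos_bags):
    """Fit an axis-parallel rectangle over one witness instance per positive bag."""
    all_inst = [x for bag in pos_bags for x in bag]
    dim = len(all_inst[0])
    centroid = [sum(x[d] for x in all_inst) / len(all_inst) for d in range(dim)]

    def dist2(x):
        return sum((x[d] - centroid[d]) ** 2 for d in range(dim))

    witnesses = [min(bag, key=dist2) for bag in pos_bags]
    lo = [min(w[d] for w in witnesses) for d in range(dim)]
    hi = [max(w[d] for w in witnesses) for d in range(dim)]
    return lo, hi

def bag_is_positive(bag, lo, hi):
    """A bag (time window) is flagged if any of its instances falls inside the APR."""
    return any(all(lo[d] <= x[d] <= hi[d] for d in range(len(lo))) for x in bag)

# Toy bags: each positive window contains one symptom-like instance near (1, 1)
# among irrelevant instances.
pos_bags = [[(0.9, 1.1), (3.0, -3.0)],
            [(1.1, 0.9), (-3.0, 3.0)],
            [(1.0, 1.0), (4.0, 4.0)]]
lo, hi = fit_apr(pos_bags)
print(bag_is_positive([(1.05, 0.95), (-5.0, 0.0)], lo, hi))  # True
print(bag_is_positive([(0.0, 0.0), (2.0, 2.0)], lo, hi))     # False
```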
NASA Astrophysics Data System (ADS)
Petri, Andrea; May, Morgan; Haiman, Zoltán
2016-09-01
Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w . When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm,w ,σ8) for a LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1 σ confidence interval in (Ωm,w ) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm,w ) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
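Under the usual Gaussian-likelihood forecasting assumption, the area of a two-parameter confidence region scales as 1/sqrt(det F) for Fisher matrix F, and independent statistics combine by adding their Fisher matrices. A sketch with invented 2×2 matrices for (Ω_m, w), showing how the factor-of-8 tomography gain quoted above maps onto an 8× smaller 1σ ellipse area:

```python
from math import pi, sqrt

def ellipse_area(F, dchi2=2.30):
    """Area of the joint confidence ellipse for two parameters with 2x2 Fisher
    matrix F (nested lists). dchi2 = 2.30 gives the 68.3% region for 2 parameters.
    Area = pi * dchi2 * sqrt(det F^-1) = pi * dchi2 / sqrt(det F)."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return pi * dchi2 / sqrt(det)

def add_fisher(F1, F2):
    """Independent probes combine by adding their Fisher matrices."""
    return [[F1[i][j] + F2[i][j] for j in range(2)] for i in range(2)]

# Illustrative (made-up) matrices: a single-bin power spectrum, and the same
# probe with tomography providing ~8x more information per matrix element.
F_single = [[40.0, 10.0], [10.0, 5.0]]
F_tomo = [[8 * x for x in row] for row in F_single]
print(ellipse_area(F_single) / ellipse_area(F_tomo))  # ~8.0
```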
Self-calibration of photometric redshift scatter in weak-lensing surveys
Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary
2010-06-11
Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors nor parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at several-σ level, but is unlikely to completely invalidate the self-calibration technique.
Accelerated load testing of geosynthetic base reinforced pavement test sections.
DOT National Transportation Integrated Search
2011-02-01
The main objective of this research is to evaluate the benefits of geosynthetic stabilization and reinforcement of subgrade/base aggregate layers in flexible pavements built on weak subgrades and the effect of pre-rut pavement sections, prior to the ...
El-Gabbas, Ahmed; Dormann, Carsten F
2018-02-01
Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") to regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whichever prior was used, making the contribution of the priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. 
However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions.
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variation for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior with a better model fit and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of the different types of informative priors on model estimation and goodness-of-fit have been compared and summarized. Finally, based on the results, recommendations for future research topics and study applications have been made. Copyright © 2013 Elsevier Ltd. All rights reserved.
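The two-stage updating idea can be sketched with the conjugate Poisson-gamma model for crash counts: historical data turn a weak initial prior into an informative one, which is then used as the prior for the current data. This is a sketch of the general mechanism, not the paper's exact procedure; all counts below are invented:

```python
def poisson_gamma_update(a, b, counts):
    """Conjugate update for a Poisson rate per site-year:
    gamma(shape a, rate b) prior, one observed count per site-year."""
    return a + sum(counts), b + len(counts)

# Stage 1: build an informative prior from historical crash counts.
a0, b0 = 1.0, 1.0                        # weak initial prior
hist = [2, 1, 3, 0, 2, 1, 2, 1, 2, 1,
        2, 2, 1, 2, 1, 2, 1, 2, 1, 1]    # 30 crashes over 20 site-years
a1, b1 = poisson_gamma_update(a0, b0, hist)

# Stage 2: use the stage-1 posterior as the prior for the current data.
current = [1, 0, 1, 1, 1]                # 4 crashes over 5 site-years
a2, b2 = poisson_gamma_update(a1, b1, current)

informative_mean = a2 / b2                               # shrunk toward history
flat_mean = (a0 + sum(current)) / (b0 + len(current))    # noninformative route
print(informative_mean, flat_mean)
```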
Reese, Ellen; DeVerteuil, Geoffrey; Thach, Leanne
2010-01-01
This case study of recent efforts to deconcentrate poverty within the Skid Row area of Los Angeles examines processes of "weak-center" gentrification as it applies to a "service dependent ghetto," thus filling two key gaps in prior scholarship. We document the collaboration between the government, business and development interests, and certain non-profit agencies in this process and identify two key mechanisms of poverty deconcentration: housing/service displacement and the criminalization of low income residents. Following Harvey, we argue that these efforts are driven by pressures to find a "spatial fix" for capital accumulation through Downtown redevelopment. This process has been hotly contested, however, illustrating the strength of counter-pressures to gentrification/poverty deconcentration within "weak-center" urban areas.
Addressing potential prior-data conflict when using informative priors in proof-of-concept studies.
Mutsvari, Timothy; Tytgat, Dominique; Walley, Rosalind
2016-01-01
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively, one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior-credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst for any specific study, the operating characteristics of any selected approach should be assessed and agreed at the design stage; we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
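Both approaches above can be sketched for the normal case with known sampling variance: the testing approach compares the observed mean to the prior predictive distribution N(m, s² + σ²/n) and reverts to a vague prior when conflict is declared, while the mixture approach reweights a precise and a diffuse component by their prior-predictive densities. A minimal sketch; the prior and data values are invented:

```python
from math import erf, exp, log, pi, sqrt

def post_mean_var(m, s2, ybar, se2):
    """Posterior for a normal mean: N(m, s2) prior, ybar ~ N(mu, se2)."""
    w = s2 / (s2 + se2)
    return m + w * (ybar - m), w * se2

def prior_predictive_pvalue(m, s2, ybar, se2):
    """Two-sided tail probability of ybar under the prior predictive N(m, s2 + se2)."""
    z = abs(ybar - m) / sqrt(s2 + se2)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def mixture_post_mean(components, ybar, se2):
    """components: list of (weight, m, s2); weights updated by prior-predictive density."""
    def logpred(m, s2):
        v = s2 + se2
        return -0.5 * (log(2 * pi * v) + (ybar - m) ** 2 / v)
    logws = [log(w) + logpred(m, s2) for w, m, s2 in components]
    mx = max(logws)
    ws = [exp(l - mx) for l in logws]
    tot = sum(ws)
    return sum(wu / tot * post_mean_var(m, s2, ybar, se2)[0]
               for wu, (_, m, s2) in zip(ws, components))

# Historical-placebo prior N(0, 0.5^2) vs an observed mean of 2.0 with se 0.5:
se2 = 0.25
p = prior_predictive_pvalue(0.0, 0.25, 2.0, se2)      # ~0.005 -> conflict declared
single = post_mean_var(0.0, 0.25, 2.0, se2)[0]        # 1.0: dragged halfway to the prior
robust = mixture_post_mean([(0.8, 0.0, 0.25),         # precise component
                            (0.2, 0.0, 100.0)], 2.0, se2)  # diffuse component
print(p, single, robust)  # the mixture mean (~1.5) sits much closer to the data
```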
Structure and morphology of submarine slab slides: clues to origin and behavior
O'Leary, Dennis W.
1991-01-01
Geologic features suggest that some slab slides probably result from long-term strength degradation of weak layers deep in the homoclinal section. Time-dependent strain in clay-rich layers can create potential slide surfaces of low frictional strength. Competent layers are weak in tension and probably fragment in the first instance of, or even prior to, translation, and the allochthonous mass is readily transformed into a high-momentum debris flow. The structure and geomorphology of slab slides provide important clues to their origin and behavior. -from Author
NASA Astrophysics Data System (ADS)
Harshavardhan, K. S.; Rajeswari, M.; Hwang, D. M.; Chen, C. Y.; Sands, T. D.; Venkatesan, T.; Tkaczyk, J. E.; Lay, K. W.; Safari, A.; Johnson, L.
1992-12-01
Anisotropic surface texturing of the polycrystalline yttria-stabilized zirconia substrates, prior to YBa2Cu3O(7-x) film deposition, is shown to promote in-plane (basal plane) ordering of the film growth in addition to the c-axis texturing. The Jc's of the films in the weak-link-dominated low-field regime are enhanced considerably, and this result is attributed to the reduction of weak links resulting from a reduction in the number of in-plane large-angle grain boundaries.
Attrition and success rates of accelerated students in nursing courses: a systematic review.
Doggrell, Sheila Anne; Schaffer, Sally
2016-01-01
There is a comprehensive literature on the academic outcomes (attrition and success) of students in traditional/baccalaureate nursing programs, but much less is known about the academic outcomes of students in accelerated nursing programs. The aim of this systematic review is to report on the attrition and success rates (either internal examination or NCLEX-RN) of accelerated students, compared to traditional students. For the systematic review, the databases (Pubmed, Cinahl and PsychINFO) and Google Scholar were searched using the search terms 'accelerated' or 'accreditation for prior learning', 'fast-track' or 'top up' and 'nursing' with 'attrition' or 'retention' or 'withdrawal' or 'success' from 1994 to January 2016. All relevant articles were included, regardless of quality. The findings of 19 studies of attrition rates and/or success rates for accelerated students are reported. For international accelerated students, there were only three studies, which are heterogeneous, and have major limitations. One of three studies has lower attrition rates, and one has shown higher success rates, than traditional students. In contrast, another study has shown high attrition and low success for international accelerated students. For graduate accelerated students, most of the studies are high quality, and showed that they have rates similar or better than traditional students. Thus, five of six studies have shown similar or lower attrition rates. Four of these studies with graduate accelerated students and an additional seven studies of success rates only, have shown similar or better success rates, than traditional students. There are only three studies of non-university graduate accelerated students, and these had weaknesses, but were consistent in reporting higher attrition rates than traditional students. The paucity and weakness of information available make it unclear as to the attrition and/or success of international accelerated students in nursing programs. 
The good information available suggests that accelerated programs may be working reasonably well for the graduate students. However, the limited information available for non-university graduate students is weak, but consistent, in suggesting they may struggle in accelerated courses. Further studies are needed to determine the attrition and success rates of accelerated students, particularly for international and non-university graduate students.
Hepatic neosporosis in a dog treated for pemphigus foliaceus
USDA-ARS?s Scientific Manuscript database
A 4 year old, female, spayed Border Collie was presented for progressive lethargy, inappetence, and weakness of four days duration. The animal had been diagnosed with pemphigus foliaceus three months prior and was receiving combination immunosuppressive therapy. Serum biochemistry revealed severely ...
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
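A common choice for the (inverted) S-shaped weighting function mentioned above is the Tversky-Kahneman form w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ), which overweights small probabilities and underweights large ones for γ < 1. A sketch of distorted Bayesian belief revision in an urn-ball setting; the γ value and the choice to apply the same weighting to priors and likelihoods are illustrative assumptions, not the paper's fitted model:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting; inverted-S shaped for gamma < 1."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def distorted_posterior(prior, lik_h, lik_alt, g_prior=0.61, g_lik=0.61):
    """Bayes rule applied to subjectively weighted prior and likelihoods."""
    num = tk_weight(prior, g_prior) * tk_weight(lik_h, g_lik)
    den = num + tk_weight(1 - prior, g_prior) * tk_weight(lik_alt, g_lik)
    return num / den

p_bayes = distorted_posterior(0.7, 0.8, 0.2, 1.0, 1.0)  # gamma = 1 recovers Bayes rule
p_subj = distorted_posterior(0.7, 0.8, 0.2)             # distorted belief revision
print(p_bayes, p_subj)  # the distorted posterior is less extreme than the Bayesian one
```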
Comment on ‘Oxygen vacancy-induced magnetic moment in edge-sharing CuO2 chains of Li2CuO2’
NASA Astrophysics Data System (ADS)
Kuzian, R. O.; Klingeler, R.; Lorenz, W. E. A.; Wizent, N.; Nishimoto, S.; Nitzsche, U.; Rosner, H.; Milosavljevic, D.; Hozoi, L.; Yadav, R.; Richter, J.; Hauser, A.; Geck, J.; Hayn, R.; Yushankhai, V.; Siurakshina, L.; Monney, C.; Schmitt, T.; Thar, J.; Roth, G.; Ito, T.; Yamaguchi, H.; Matsuda, M.; Johnston, S.; Málek, J.; Drechsler, S.-L.
2018-05-01
In a recent work devoted to the magnetism of Li2CuO2, Shu et al (2017 New J. Phys. 19, 023026) have proposed a ‘simplified’ unfrustrated microscopic model that differs considerably from the models refined through decades of prior work. We show that the proposed model is at odds with known experimental data, including the reported magnetic susceptibility χ(T) data up to 550 K. Using an 8th order high-temperature expansion for χ(T), we show that the experimental data for Li2CuO2 are consistent with the prior model derived from inelastic neutron scattering studies. We also establish the T-range of validity for a Curie–Weiss law for the real frustrated magnetic system. We argue that the knowledge of the long-range ordered magnetic structure for T < T N and of χ(T) in a restricted T-range provides insufficient information to extract all of the relevant couplings in frustrated magnets; the saturation field and INS data must also be used to determine several exchange couplings, including the weak but decisive frustrating antiferromagnetic interchain couplings.
Lee, Tian-Fu
2013-12-01
A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation in both schemes is their violation of the contributory property of key agreements. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared to conventional schemes, the proposed scheme has fewer weaknesses and provides better security and efficiency.
Bakken, Inger J; Tveito, Kari; Aaberg, Kari M; Ghaderi, Sara; Gunnes, Nina; Trogstad, Lill; Magnus, Per; Stoltenberg, Camilla; Håberg, Siri E
2016-09-02
Chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) is a complex condition. Causal factors are not established, although underlying psychological or immunological susceptibility has been proposed. We studied primary care diagnoses for children with CFS/ME, with children with another hospital diagnosis (type 1 diabetes mellitus [T1DM]) and the general child population as comparison groups. All Norwegian children born 1992-2012 constituted the study sample. Children with CFS/ME (n = 1670) or T1DM (n = 4937) were identified in the Norwegian Patient Register (NPR) (2008-2014). Children without either diagnosis constituted the general child population comparison group (n = 1337508). We obtained information on primary care diagnoses from the Norwegian Directorate of Health. For each primary care diagnosis, the proportion and 99 % confidence interval (CI) within the three groups was calculated, adjusted for sex and age by direct standardization. Children with CFS/ME were more often registered with a primary care diagnosis of weakness/general tiredness (89.9 % [99 % CI: 88.0 to 91.8 %]) than children in either comparison group (T1DM: 14.5 % [99 % CI: 13.1 to 16.0 %], general child population: 11.1 % [99 % CI: 11.0 to 11.2 %]). Also, depressive disorder and anxiety disorder were more common in the CFS/ME group, as were migraine, muscle pain, and infections. In the 2 year period prior to the diagnoses, infectious mononucleosis was registered for 11.1 % (99 % CI: 9.1 to 13.1 %) of children with CFS/ME and for 0.5 % (99 % CI: 0.2 to 0.8 %) of children with T1DM. Of children with CFS/ME, 74.6 % (1292/1670) were registered with a prior primary care diagnosis of weakness / general tiredness. The time span from the first primary care diagnosis of weakness / general tiredness to the specialist health care diagnosis of CFS/ME was 1 year or longer for 47.8 %. This large nationwide registry linkage study confirms that the clinical picture in CFS/ME is complex. 
Children with CFS/ME were frequently diagnosed with infections, supporting the hypothesis that infections may be involved in the causal pathway. The long time span often observed from the first diagnosis of weakness / general tiredness to the diagnosis of CFS/ME might indicate that the treatment of these patients is sometimes not optimal.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-05
... BFT by 56.5 percent. Although limited information exists about the effects of weak hooks on BFT post- release mortality, post-release mortality is expected to be reduced because BFT likely straighten the weak... information will aid in further understanding more precisely the effects of weak hook use on BFT post-release...
Influence of inter-item symmetry in visual search.
Roggeveen, Alexa B; Kingstone, Alan; Enns, James T
2004-01-01
Does visual search involve a serial inspection of individual items (Feature Integration Theory) or are items grouped and segregated prior to their consideration as a possible target (Attentional Engagement Theory)? For search items defined by motion and shape there is strong support for prior grouping (Kingstone and Bischof, 1999). The present study tested for grouping based on inter-item shape symmetry. Results showed that target-distractor symmetry strongly influenced search whereas distractor-distractor symmetry influenced search more weakly. This indicates that static shapes are evaluated for similarity to one another prior to their explicit identification as 'target' or 'distractor'. Possible reasons for the unequal contributions of target-distractor and distractor-distractor relations are discussed.
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
A Method for Constructing Informative Priors for Bayesian Modeling of Occupational Hygiene Data.
Quick, Harrison; Huynh, Tran; Ramachandran, Gurumurthy
2017-01-01
In many occupational hygiene settings, the demand for more accurate, more precise results is at odds with limited resources. To combat this, practitioners have begun using Bayesian methods to incorporate prior information into their statistical models in order to obtain more refined inference from their data. This is not without risk, however, as incorporating prior information that disagrees with the information contained in data can lead to spurious conclusions, particularly if the prior is too informative. In this article, we propose a method for constructing informative prior distributions for normal and lognormal data that are intuitive to specify and robust to bias. To demonstrate the use of these priors, we walk practitioners through a step-by-step implementation of our priors using an illustrative example. We then conclude with recommendations for general use. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
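One intuitive specification in the spirit described above: elicit a 95% interval for the geometric mean (GM) of the exposure distribution, convert it to a normal prior on log(GM), and update conjugately with log-transformed measurements. This is a sketch of the general idea, not the authors' exact construction; the log-scale sd is assumed known and all numbers are invented:

```python
from math import exp, log, sqrt

def prior_from_interval(lo, hi, coverage_z=1.96):
    """Normal prior on log(GM) from a judged 95% interval for the geometric mean."""
    mu = (log(lo) + log(hi)) / 2
    sd = (log(hi) - log(lo)) / (2 * coverage_z)
    return mu, sd

def update_log_gm(mu0, sd0, logs, sigma):
    """Conjugate normal update for the mean of log-exposures (sigma assumed known)."""
    prec = 1 / sd0**2 + len(logs) / sigma**2
    mu = (mu0 / sd0**2 + sum(logs) / sigma**2) / prec
    return mu, sqrt(1 / prec)

mu0, sd0 = prior_from_interval(0.1, 1.0)         # judged GM range, mg/m^3
logs = [log(x) for x in [0.4, 0.6, 0.5]]         # log-transformed measurements
mu1, sd1 = update_log_gm(mu0, sd0, logs, sigma=0.5)
print(exp(mu1))  # posterior GM estimate, between prior centre (~0.32) and sample GM (~0.49)
```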
Evaluating arguments during instigations of defence motivation and accuracy motivation.
Liu, Cheng-Hong
2017-05-01
When people evaluate the strength of an argument, their motivations are likely to influence the evaluation. However, few studies have specifically investigated the influences of motivational factors on argument evaluation. This study examined the effects of defence and accuracy motivations on argument evaluation. According to the compatibility between the advocated positions of arguments and participants' prior beliefs and the objective strength of arguments, participants evaluated four types of arguments: compatible-strong, compatible-weak, incompatible-strong, and incompatible-weak arguments. Experiment 1 revealed that participants possessing a high defence motivation rated compatible-weak arguments as stronger and incompatible-strong ones as weaker than participants possessing a low defence motivation. However, the strength ratings between the high and low defence groups regarding both compatible-strong and incompatible-weak arguments were similar. Experiment 2 revealed that when participants possessed a high accuracy motivation, they rated compatible-weak arguments as weaker and incompatible-strong ones as stronger than when they possessed a low accuracy motivation. However, participants' ratings on both compatible-strong and incompatible-weak arguments were similar when comparing high and low accuracy conditions. The results suggest that defence and accuracy motivations are two major motives influencing argument evaluation. However, they primarily influence the evaluation results for compatible-weak and incompatible-strong arguments, but not for compatible-strong and incompatible-weak arguments. © 2016 The British Psychological Society.
Prospective memory, personality, and individual differences.
Uttl, Bob; White, Carmela A; Wong Gonzalez, Daniela; McDouall, Joanna; Leonard, Carrie A
2013-01-01
A number of studies investigating the relationship between personality and prospective memory (ProM) have appeared during the last decade. However, a review of these studies reveals little consistency in their findings and conclusions. To clarify the relationship between ProM and personality, we conducted two studies: a meta-analysis of prior research investigating the relationships between ProM and personality, and a study with 378 participants examining the relationships between ProM, personality, verbal intelligence, and retrospective memory. Our review of prior research revealed great variability in the measures used to assess ProM, and in the methodological quality of prior research; these two factors may partially explain inconsistent findings in the literature. Overall, the meta-analysis revealed very weak correlations (rs ranging from 0.09 to 0.10) between ProM and three of the Big Five factors: Openness, Conscientiousness, and Agreeableness. Our experimental study showed that ProM performance was related to individual differences such as verbal intelligence as well as to personality factors and that the relationship between ProM and personality factors depends on the ProM subdomain. In combination, the two studies suggest that ProM performance is relatively weakly related to personality factors and more strongly related to individual differences in cognitive factors.
NASA Astrophysics Data System (ADS)
Lu, Weizhao; Huang, Chunhui; Hou, Kun; Shi, Liting; Zhao, Huihui; Li, Zhengmei; Qiu, Jianfeng
2018-05-01
In continuous-variable quantum key distribution (CV-QKD), a weak signal carrying information is transmitted from Alice to Bob; during this process it is easily influenced by unknown noise, which reduces the signal-to-noise ratio and strongly impacts the reliability and stability of the communication. A recurrent quantum neural network (RQNN) is an artificial neural network model that can perform stochastic filtering without any prior knowledge of the signal and noise. In this paper, a modified RQNN algorithm incorporating the expectation-maximization algorithm is proposed to process the signal in CV-QKD, following the basic rules of quantum mechanics. After RQNN, noise power decreases by about 15 dB, the coherent signal recognition rate of the RQNN is 96%, and the quantum bit error rate (QBER) drops to 4%, which is 6.9% lower than the original QBER; channel capacity is notably enlarged.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
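The prior-likelihood weighting described here can be illustrated with a minimal conjugate Beta-Binomial sketch (not the authors' lottery task; the numbers are illustrative): as the sample size grows, the posterior mean moves from the prior mean toward the observed rate, mirroring the increased weight on likelihood information.

```python
# Conjugate Beta-Binomial integration of prior and likelihood.
# Prior belief about reward probability: Beta(6, 2), mean 0.75.
a, b = 6.0, 2.0

def posterior_mean(k, n, a, b):
    """Posterior mean of p under a Beta(a, b) prior after observing
    k successes in n Binomial trials: Beta(a + k, b + n - k)."""
    return (a + k) / (a + b + n)

# Same observed success rate (0.25), increasing sample size:
# the posterior mean shifts from the prior mean (0.75) toward 0.25.
for k, n in [(1, 4), (10, 40), (100, 400)]:
    print(n, round(posterior_mean(k, n, a, b), 3))
```

With four observations the prior dominates; with four hundred, the data do.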
Marginally specified priors for non-parametric Bayesian estimation
Kessler, David C.; Hoff, Peter D.; Dunson, David B.
2014-01-01
Summary: Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813
ERIC Educational Resources Information Center
Brennan, Tim
1980-01-01
A review of prior classification systems of runaways is followed by a descriptive taxonomy of runaways developed using cluster-analytic methods. The empirical types illustrate patterns of weakness in bonds between runaways and families, schools, or peer relationships. (Author)
Sun Exposure and Melanoma Survival: A GEM Study
Berwick, Marianne; Reiner, Anne S.; Paine, Susan; Armstrong, Bruce K.; Kricker, Anne; Goumas, Chris; Cust, Anne E.; Thomas, Nancy E.; Groben, Pamela A.; From, Lynn; Busam, Klaus; Orlow, Irene; Marrett, Loraine D.; Gallagher, Richard P.; Gruber, Stephen B.; Anton-Culver, Hoda; Rosso, Stefano; Zanetti, Roberto; Kanetsky, Peter A.; Dwyer, Terry; Venn, Alison; Lee-Taylor, Julia; Begg, Colin B.
2014-01-01
Background: We previously reported a significant association between higher ultraviolet radiation exposure before diagnosis and greater survival with melanoma in a population-based study in Connecticut. We sought to evaluate the hypothesis that sun exposure prior to diagnosis was associated with greater survival in a larger, international population-based study with more detailed exposure information. Methods: We conducted a multi-center, international population-based study in four countries (Australia, Italy, Canada, and the United States) with 3,578 cases of melanoma and an average of 7.4 years of follow-up. Measures of sun exposure included sunburn, intermittent exposure, hours of holiday sun exposure, hours of water-related outdoor activities, ambient UVB dose, histological solar elastosis, and season of diagnosis. Results: The results were not strongly supportive of the earlier hypothesis. Having had any sunburn in one year within 10 years of diagnosis was inversely associated with survival; solar elastosis, a measure of lifetime cumulative exposure, was not. Additionally, none of the intermittent exposure measures (water-related activities and sunny holidays) were associated with melanoma-specific survival. Estimated ambient UVB dose was not associated with survival. Conclusion: Although there was an apparent protective effect of sunburns within 10 years of diagnosis, there was only weak evidence in this large, international, population-based study of melanoma that sun exposure prior to diagnosis is associated with greater melanoma-specific survival. Impact: This study adds to the evidence that sun exposure prior to melanoma diagnosis has little effect on survival with melanoma. PMID:25069694
Determining informative priors for cognitive models.
Lee, Michael D; Vanpaemel, Wolf
2018-02-01
The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
Teaching Web 2.0 technologies using Web 2.0 technologies.
Rethlefsen, Melissa L; Piorun, Mary; Prince, J Dale
2009-10-01
The research evaluated participant satisfaction with the content and format of the "Web 2.0 101: Introduction to Second Generation Web Tools" course and measured the impact of the course on participants' self-evaluated knowledge of Web 2.0 tools. The "Web 2.0 101" online course was based loosely on the Learning 2.0 model. Content was provided through a course blog and covered a wide range of Web 2.0 tools. All Medical Library Association members were invited to participate. Participants were asked to complete a post-course survey. Respondents who completed the entire course or who completed part of the course self-evaluated their knowledge of nine social software tools and concepts prior to and after the course using a Likert scale. Additional qualitative information about course strengths and weaknesses was also gathered. Respondents' self-ratings showed a significant change in perceived knowledge for each tool, using a matched pair Wilcoxon signed rank analysis (P<0.0001 for each tool/concept). Overall satisfaction with the course appeared high. Hands-on exercises were the most frequently identified strength of the course; the length and time-consuming nature of the course were considered weaknesses by some. Learning 2.0-style courses, though demanding time and self-motivation from participants, can increase knowledge of Web 2.0 tools.
Petri, Andrea; May, Morgan; Haiman, Zoltán
2016-09-30
Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω m,w,σ 8) for a LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. Here we find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω m,w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω m,w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
20180312 - Applying a High-Throughput PBTK Model for IVIVE (SOT)
The ability to link in vitro and in vivo toxicity enables the use of high-throughput in vitro assays as an alternative to resource intensive animal studies. Toxicokinetics (TK) should help describe this link, but prior work found weak correlation when using a TK model for in vitr...
Exploratory Study of the Relationship between State Fiscal Effort and Academic Achievement
ERIC Educational Resources Information Center
Goodale, Timothy A.
2009-01-01
Prior empirical research has taken many varying approaches to determining whether differences in funding significantly impact student academic achievement. However, many of these studies exhibit weak generalizability due to their limited scope, timeframes, and dissimilar achievement measures. To expand upon the already robust literature in education…
Method for welding chromium molybdenum steels
Sikka, Vinod K.
1986-01-01
Chromium-molybdenum steels exhibit a weakening after welding in an area adjacent to the weld. This invention is an improved method for welding to eliminate the weakness by subjecting normalized steel to a partial temper prior to welding and subsequently fully tempering the welded article for optimum strength and ductility.
Effects of prior information on decoding degraded speech: an fMRI study.
Clos, Mareike; Langner, Robert; Meyer, Martin; Oechslin, Mathias S; Zilles, Karl; Eickhoff, Simon B
2014-01-01
Expectations and prior knowledge are thought to support the perceptual analysis of incoming sensory stimuli, as proposed by the predictive-coding framework. The current fMRI study investigated the effect of prior information on brain activity during the decoding of degraded speech stimuli. When prior information enabled the comprehension of the degraded sentences, the left middle temporal gyrus and the left angular gyrus were activated, highlighting a role of these areas in meaning extraction. In contrast, the activation of the left inferior frontal gyrus (area 44/45) appeared to reflect the search for meaningful information in degraded speech material that could not be decoded because of mismatches with the prior information. Our results show that degraded sentences evoke instantaneously different percepts and activation patterns depending on the type of prior information, in line with prediction-based accounts of perception. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.
2010-03-01
Prostate-specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on imagery data of the prostate. The necessity of processing large volumes of such data requires automatic segmentation algorithms that can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts, such as speckle noise and shadowing, which result in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape priors. The latter acts as a regularization force, which minimally biases the segmentation procedure while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.
NASA Astrophysics Data System (ADS)
Ghosh, Subhajit; Bose, Santanu; Mandal, Nibir; Das, Animesh
2018-03-01
This study integrates field evidence with laboratory experiments to show the mechanical effects of a lithologically contrasting stratigraphic sequence on the development of frontal thrusts: Main Boundary Thrust (MBT) and Daling Thrust (DT) in the Darjeeling-Sikkim Himalaya (DSH). We carried out field investigations mainly along two river sections in the DSH: Tista-Kalijhora and Mahanadi, covering an orogen-parallel stretch of 20 km. Our field observations suggest that the coal-shale dominated Gondwana sequence (sandwiched between the Daling Group in the north and Siwaliks in the south) has acted as a mechanically weak horizon to localize the MBT and DT. We simulated a similar mechanical setting in scaled model experiments to validate our field interpretation. In experiments, such a weak horizon at a shallow depth perturbs the sequential thrust progression, and causes a thrust to localize in the vicinity of the weak zone, splaying from the basal detachment. We correlate this weak-zone-controlled thrust with the DT, which accommodates a large shortening prior to activation of the weak zone as a new detachment with ongoing horizontal shortening. The entire shortening in the model is then transferred to this shallow detachment to produce a new sequence of thrust splays. Extrapolating this model result to the natural prototype, we show that the mechanically weak Gondwana Sequence has caused localization of the DT and MBT in the mountain front of DSH.
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.
Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted using the classical approach and the Bayesian approach with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up priors for the 2014 model.
Methods and Models for the Construction of Weakly Parallel Tests. Research Report 90-4.
ERIC Educational Resources Information Center
Adema, Jos J.
Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programing model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information functions are equal play an important role in the…
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. Based on the results and discussion, the parameter estimates of β and Σ are obtained as the expected values of their marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approximation is needed, generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
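A minimal sketch of this kind of Gibbs scheme on simulated data (an illustration, not the authors' implementation): under Jeffreys' prior, Σ | B, Y is inverse Wishart and vec(B) | Σ, Y is multivariate normal centered at the OLS estimate, so the sampler alternates draws from the two conditionals.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(42)

# Simulated data: n observations, k predictors (incl. intercept), p responses
n, k, p = 200, 2, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
B_true = np.array([[1.0, -0.5],
                   [2.0,  0.3]])
Y = X @ B_true + rng.normal(scale=0.5, size=(n, p))

XtX_inv = np.linalg.inv(X.T @ X)
B_ols = XtX_inv @ X.T @ Y            # OLS estimate = conditional posterior mean

B = B_ols.copy()
keep = []
for it in range(1500):
    # 1) Sigma | B, Y ~ Inverse-Wishart(n, E'E) under Jeffreys' prior
    E = Y - X @ B
    Sigma = invwishart.rvs(df=n, scale=E.T @ E, random_state=rng)
    # 2) vec(B) | Sigma, Y ~ Normal(vec(B_ols), Sigma kron (X'X)^-1)
    vec_b = rng.multivariate_normal(B_ols.flatten(order="F"),
                                    np.kron(Sigma, XtX_inv))
    B = vec_b.reshape((k, p), order="F")
    if it >= 500:                     # discard burn-in
        keep.append(B)

B_post = np.mean(keep, axis=0)        # posterior mean estimate, close to B_true
print(np.round(B_post, 2))
```

The column-stacking `order="F"` matches the vec convention in which Cov(vec(B)) = Σ ⊗ (X'X)⁻¹.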
A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.
Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy
2010-01-18
We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions which are expected to have weak, missing, or corrupt edges, or they could be regions which the user is not interested in segmenting but which are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training, we generate a map indicating the regions in the image that are likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors, obtained by representing the training data as a collection of signed distance functions. We evolve an objective energy functional to update the global parameters that represent the curve, and we vary the influence each pixel has on the evolution of these parameters based on the confidence/interest label. When we use these labels to indicate the regions with low confidence, the regions containing accurate edges have a dominant role in the evolution of the curve, and the segmentation in the low-confidence regions is approximated based on the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges, because we eliminate the influence of the low-confidence regions which may mislead the final segmentation. Similarly, when we use the labels to indicate the regions which are not of importance, we obtain a better segmentation of the object in the regions we are interested in.
Code of Federal Regulations, 2011 CFR
2011-04-01
... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...
Code of Federal Regulations, 2012 CFR
2012-04-01
... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...
Code of Federal Regulations, 2013 CFR
2013-04-01
... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...
Code of Federal Regulations, 2014 CFR
2014-04-01
... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...
Code of Federal Regulations, 2010 CFR
2010-04-01
... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...
Huynh, Glen A; Lee, Audrey J
2017-12-01
A 91-year-old male was admitted to the hospital for worsening muscle weakness, muscle pain, and unexplained soreness over the past 10 days. Four months prior to his admission, the patient had experienced a myocardial infarction and was initiated on atorvastatin 80 mg daily. Although the provider had instructed the patient to decrease the atorvastatin dose to 40 mg daily 3 months prior to admission, the patient did not adhere to the lower-dose regimen until 10 days prior to hospitalization. Upon admission, the patient presented with muscle weakness and pain, a serum creatine phosphokinase of 18,723 U/L, and a serum creatinine of 1.6 mg/dL. The atorvastatin was held and the patient was treated with intravenous fluids. The 2013 American College of Cardiology and American Heart Association Blood Cholesterol Practice Guidelines recommend the use of moderate-intensity statins in patients older than 75 years to prevent myopathy. However, in clinical practice, aggressive statin therapy is often prescribed for significant coronary disease. Prescribing high-intensity statins for patients of advanced age, such as in this case, may increase the risk of rhabdomyolysis and other complications. This case report suggests that providers should avoid or be cautious with initiating high-intensity atorvastatin in elderly patients over 75 years to minimize the risk of rhabdomyolysis.
Shehla, Romana; Khan, Athar Ali
2016-01-01
Models with bathtub-shaped hazard functions are widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming an increasing as well as a bathtub shape, is studied. This article presents a Bayesian analysis of the model and shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inference focuses on the posterior distribution of non-linear functions of the parameters. The model is also extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.
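A sketch of this kind of posterior simulation, in Python rather than R (assumptions: the Smith-Bain parameterization S(t) = exp(1 - exp((t/alpha)^beta)) for the exponential power model, uncensored data, and Normal(0, 10^2) priors on the log-parameters as a stand-in weakly informative choice). A random-walk Metropolis sampler substitutes for the authors' MCMC setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate uncensored lifetimes via the inverse CDF of the exponential
# power model S(t) = exp(1 - exp((t/alpha)^beta))
alpha_true, beta_true = 2.0, 1.5
u = rng.uniform(size=300)
t = alpha_true * np.log(1.0 - np.log(1.0 - u)) ** (1.0 / beta_true)

def log_post(la, lb):
    """Log-posterior on (log alpha, log beta); weakly informative
    Normal(0, 10^2) priors on the log scale (an assumed choice).
    log f(t) = log b - log a + (b-1) log(t/a) + (t/a)^b + 1 - exp((t/a)^b)."""
    a, b = np.exp(la), np.exp(lb)
    z = (t / a) ** b
    loglik = np.sum(np.log(b) - np.log(a) + (b - 1) * np.log(t / a)
                    + z + 1.0 - np.exp(z))
    logprior = -(la ** 2 + lb ** 2) / (2.0 * 10.0 ** 2)
    return loglik + logprior

# Random-walk Metropolis on the log-parameters
cur = np.array([0.0, 0.0])
cur_lp = log_post(*cur)
samples = []
for it in range(4000):
    prop = cur + rng.normal(scale=0.1, size=2)
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    if it >= 1000:                          # discard burn-in
        samples.append(np.exp(cur))         # back to (alpha, beta)

post_mean = np.mean(samples, axis=0)        # near (alpha_true, beta_true)
print(np.round(post_mean, 2))
```

Censoring would only change the likelihood (censored observations contribute log S(t) instead of log f(t)); the sampler itself is unchanged.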
A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.
Yu, Hongyang; Khan, Faisal; Veitch, Brian
2017-09-01
Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault trees and event trees (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible tool for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry. © 2017 Society for Risk Analysis.
Weak-value amplification as an optimal metrological protocol
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Escher, B. M.; de Matos Filho, R. L.; Zagury, N.; Davidovich, L.
2015-06-01
The implementation of weak-value amplification requires the pre- and postselection of states of a quantum system, followed by the observation of the response of the meter, which interacts weakly with the system. Data acquisition from the meter is conditioned to successful postselection events. Here we derive an optimal postselection procedure for estimating the coupling constant between system and meter and show that it leads both to weak-value amplification and to the saturation of the quantum Fisher information, under conditions fulfilled by all previously reported experiments on the amplification of weak signals. For most of the preselected states, full information on the coupling constant can be extracted from the meter data set alone, while for a small fraction of the space of preselected states, it must be obtained from the postselection statistics.
Dokoumetzidis, Aristides; Aarons, Leon
2005-08-01
We investigated the propagation of population pharmacokinetic information across clinical studies by applying Bayesian techniques. The aim was to summarize the population pharmacokinetic estimates of a study in appropriate statistical distributions in order to use them as Bayesian priors in subsequent population pharmacokinetic analyses. Various data sets of simulated and real clinical data were fitted with WinBUGS, with and without informative priors. The posterior estimates of fittings with non-informative priors were used to build parametric informative priors, and the whole procedure was carried out in a consecutive manner. The posterior distributions of the fittings with informative priors were compared to those of the meta-analysis fittings of the respective combinations of data sets. Good agreement was found for the simulated and experimental datasets when the populations were exchangeable, with the posterior distributions from the fittings with the prior nearly identical to those estimated with meta-analysis. However, when populations were not exchangeable, an alternative parametric form for the prior, the natural conjugate prior, had to be used in order to obtain consistent results. In conclusion, the results of a population pharmacokinetic analysis may be summarized in Bayesian prior distributions that can be used consecutively with other analyses. The procedure is an alternative to meta-analysis and gives comparable results. It has the advantage of being faster than meta-analysis, due to the large datasets used with the latter, and it can be performed when the data included in the prior are not actually available.
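The mechanics of carrying a posterior forward as a prior can be sketched with a conjugate normal model for a single mean with known variance (a simplification of the population-pharmacokinetic setting; the study sizes and values are illustrative). When the populations are exchangeable, sequential updating and a pooled meta-analysis give identical posteriors:

```python
import numpy as np

def normal_update(prior_mean, prior_var, data, sigma2):
    """Conjugate update for a normal mean with known data variance sigma2:
    precisions add, and the posterior mean is the precision-weighted average."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / sigma2)
    return post_mean, post_var

rng = np.random.default_rng(7)
sigma2 = 1.0
study1 = rng.normal(0.5, 1.0, size=50)
study2 = rng.normal(0.5, 1.0, size=80)

# Sequential: summarize study 1 as a parametric prior for study 2
m1, v1 = normal_update(0.0, 1e6, study1, sigma2)      # vague prior first
m_seq, v_seq = normal_update(m1, v1, study2, sigma2)

# Meta-analysis: pool the raw data from both studies in one fit
m_pool, v_pool = normal_update(0.0, 1e6, np.concatenate([study1, study2]), sigma2)

print(np.allclose([m_seq, v_seq], [m_pool, v_pool]))  # True
```

With non-conjugate or hierarchical models the equivalence is only approximate, which is why the abstract reports "good agreement" rather than identity.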
Estimating Bayesian Phylogenetic Information Content
Lewis, Paul O.; Chen, Ming-Hui; Kuo, Lynn; Lewis, Louise A.; Fučíková, Karolina; Neupane, Suman; Wang, Yu-Bo; Shi, Daoyuan
2016-01-01
Measuring the phylogenetic information content of data has a long history in systematics. Here we explore a Bayesian approach to information content estimation. The entropy of the posterior distribution compared with the entropy of the prior distribution provides a natural way to measure information content. If the data have no information relevant to ranking tree topologies beyond the information supplied by the prior, the posterior and prior will be identical. Information in data discourages consideration of some hypotheses allowed by the prior, resulting in a posterior distribution that is more concentrated (has lower entropy) than the prior. We focus on measuring information about tree topology using marginal posterior distributions of tree topologies. We show that both the accuracy and the computational efficiency of topological information content estimation improve with use of the conditional clade distribution, which also allows topological information content to be partitioned by clade. We explore two important applications of our method: providing a compelling definition of saturation and detecting conflict among data partitions that can negatively affect analyses of concatenated data. [Bayesian; concatenation; conditional clade distribution; entropy; information; phylogenetics; saturation.] PMID:27155008
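The entropy comparison can be made concrete with a toy marginal distribution over three tree topologies (a sketch with assumed numbers; real analyses estimate topology probabilities by MCMC, optionally via the conditional clade distribution):

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# Three possible topologies: uniform prior vs. a data-concentrated posterior
prior = [1/3, 1/3, 1/3]
posterior = [0.90, 0.05, 0.05]

# Information content: how much the data reduced topological uncertainty
info = entropy(prior) - entropy(posterior)
print(round(info, 3))  # about 0.704 nats
```

When the data carry no topological information, posterior equals prior and the difference is zero; a posterior concentrated on one topology approaches the prior entropy as its information content.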
Teacher-Student Relationship at University: An Important yet Under-Researched Field
ERIC Educational Resources Information Center
Hagenauer, Gerda; Volet, Simone E.
2014-01-01
This article reviews the extant research on the relationship between students and teachers in higher education across three main areas: the quality of this relationship, its consequences and its antecedents. The weaknesses and gaps in prior research are highlighted and the importance of addressing the multi-dimensional and context-bound nature of…
Making the Most of Small Effects
ERIC Educational Resources Information Center
Thompson, Ross A.
2009-01-01
The idea that classroom social ecologies are shaped by the aggregate effects of peers' prior care experiences is provocative, even though the evidence is weak that this explains the small and diminishing effect of childcare experience in the National Institute of Child Health and Human Development study. Small effects may indeed be small effects,…
A Randomized Crossover Study of Web-Based Media Literacy to Prevent Smoking
ERIC Educational Resources Information Center
Shensa, Ariel; Phelps-Tschang, Jane; Miller, Elizabeth; Primack, Brian A.
2016-01-01
Feasibly implemented Web-based smoking media literacy (SML) programs have been associated with improving SML skills among adolescents. However, prior evaluations have generally had weak experimental designs. We aimed to examine program efficacy using a more rigorous crossover design. Seventy-two ninth grade students completed a Web-based SML…
Evaluating a Proposed Learning Experience in Terms of Eight Learning Theories.
ERIC Educational Resources Information Center
Abram, Marie J.
The work of eight learning theorists was used to evaluate a proposed adult education/learning experience in an effort to operationalize a system for locating strengths and weaknesses in an instructional system prior to its implementation. Thirty-five implications for adult education were extrapolated from work representing the Behaviorist (B.F.…
Line Assignments and Position Measurements in Several Weak CO2 Bands between 4590 /cm and 7930/ cm
NASA Technical Reports Server (NTRS)
Giver, L. P.; Kshirsagar, R. J.; Freedman, R. C.; Chackerian, C.; Wattson, R. B.
1998-01-01
A substantial set of CO2 spectra from 4500 to 12000 /cm has been obtained at Ames with 1500 m path length using a Bomem DA8 FTS. The signal/noise was improved compared to prior spectra obtained in this laboratory by including a filter wheel limiting the band-pass of each spectrum to several hundred/cm. We have measured positions of lines in several weak bands not previously resolved in laboratory spectra. Using our positions and assignments of lines of the Q branch of the 31103-00001 vibrational band at 4591/cm, we have re-determined the rotational constants for the 31103f levels. Q-branch lines of this band were previously observed, but misassigned, in Venus spectra by Mandin. The current HITRAN values of the rotational constants for this level are incorrect due to the Q-branch misassignments. Our prior measurements of the 21122-00001 vibrational band at 7901/cm were limited to Q- and R-branch lines; with the improved signal/noise of these new spectra we have now measured lines in the weaker P branch.
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors
Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with informative priors. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with informative priors, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model. PMID:28257437
Low Titers of Canine Distemper Virus Antibody in Wild Fishers (Martes pennanti) in the Eastern USA.
Peper, Steven T; Peper, Randall L; Mitcheltree, Denise H; Kollias, George V; Brooks, Robert P; Stevens, Sadie S; Serfass, Thomas L
2016-01-01
Canine distemper virus (CDV) infects species in the order Carnivora. Members of the family Mustelidae are among the species most susceptible to CDV and have a high mortality rate after infection. Assessing an animal's pathogen or disease load prior to any reintroduction project is important to help protect the animal being reintroduced, as well as the wildlife and livestock in the area of relocation. We screened 58 fishers for CDV antibody prior to their release into Pennsylvania, US, as part of a reintroduction program. Five of the 58 (9%) fishers had a weak-positive reaction for CDV antibody at a dilution of 1:16. None of the fishers exhibited any clinical sign of canine distemper while being held prior to release.
Rugg, Michael D.
2016-01-01
Memory reactivation—the reinstatement of processes and representations engaged when an event is initially experienced—is believed to play an important role in strengthening and updating episodic memory. The present study examines how memory reactivation during a potentially interfering event influences memory for a previously experienced event. Participants underwent fMRI during the encoding phase of an AB/AC interference task in which some words were presented twice in association with two different encoding tasks (AB and AC trials) and other words were presented once (DE trials). The later memory test required retrieval of the encoding tasks associated with each of the study words. Retroactive interference was evident for the AB encoding task and was particularly strong when the AC encoding task was remembered rather than forgotten. We used multivariate classification and pattern similarity analysis (PSA) to measure reactivation of the AB encoding task during AC trials. The results demonstrated that reactivation of generic task information measured with multivariate classification predicted subsequent memory for the AB encoding task regardless of whether interference was strong and weak (trials for which the AC encoding task was remembered or forgotten, respectively). In contrast, reactivation of neural patterns idiosyncratic to a given AB trial measured with PSA only predicted memory when the strength of interference was low. These results suggest that reactivation of features of an initial experience shared across numerous events in the same category, but not features idiosyncratic to a particular event, are important in resisting retroactive interference caused by new learning. SIGNIFICANCE STATEMENT Reactivating a previously encoded memory is believed to provide an opportunity to strengthen the memory, but also to return the memory to a labile state, making it susceptible to interference. 
However, there is debate as to how memory reactivation elicited by a potentially interfering event influences subsequent retrieval of the memory. The findings of the current study indicate that reactivating features idiosyncratic to a particular experience during interference only influences subsequent memory when interference is relatively weak. Critically, reactivation of generic contextual information predicts subsequent source memory when retroactive interference is either strong and weak. The results indicate that reactivation of generic information about a prior episode mitigates forgetting due to retroactive interference. PMID:27076433
Subjective wellbeing, suicide and socioeconomic factors: an ecological analysis in Hong Kong.
Hsu, C-Y; Chang, S-S; Yip, P S F
2018-04-10
There has recently been an increased interest in mental health indicators for the monitoring of population wellbeing, which is among the targets of Sustainable Development Goals adopted by the United Nations. Levels of subjective wellbeing and suicide rates have been proposed as indicators of population mental health, but prior research is limited. Data on individual happiness and life satisfaction were sourced from a population-based survey in Hong Kong (2011). Suicide data were extracted from Coroner's Court files (2005-2013). Area characteristic variables included local poverty rate and four factors derived from a factor analysis of 21 variables extracted from the 2011 census. The associations between mean happiness and life satisfaction scores and suicide rates were assessed using Pearson correlation coefficient at two area levels: 18 districts and 30 quantiles of large street blocks (LSBs; n = 1620). LSB is a small area unit with a higher level of within-unit homogeneity compared with districts. Partial correlations were used to control for area characteristics. Happiness and life satisfaction demonstrated weak inverse associations with suicide rate at the district level (r = -0.32 and -0.36, respectively) but very strong associations at the LSB quantile level (r = -0.83 and -0.84, respectively). There were generally very weak or weak negative correlations across sex/age groups at the district level but generally moderate to strong correlations at the LSB quantile level. The associations were markedly attenuated or became null after controlling for area characteristics. Subjective wellbeing is strongly associated with suicide at a small area level; socioeconomic factors can largely explain this association. Socioeconomic factors could play an important role in determining the wellbeing of the population, and this could inform policies aimed at enhancing population wellbeing.
Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.
Jiang, Yuan; He, Yunxiao; Zhang, Heping
LASSO is a popular statistical tool, often used in conjunction with generalized linear models, that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, a wealth of biological and biomedical data has been collected, and it may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding to the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to Least Angle Regression (LARS). Asymptotic theory and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
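A minimal sketch of a pLASSO-style objective: the LASSO loss plus a term penalizing discrepancy from prior coefficient estimates. The quadratic discrepancy and the ISTA solver here are illustrative assumptions; the paper's exact discrepancy measure and its LARS-style path algorithm differ:

```python
import numpy as np

def plasso(X, y, beta_prior, lam=0.1, eta=0.5, n_iter=500):
    """Proximal-gradient (ISTA) solver for
    (1/2n)||y - Xb||^2 + (eta/2)||b - beta_prior||^2 + lam*||b||_1."""
    n, p = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + eta)  # 1/Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n + eta * (beta - beta_prior)
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_beta = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=0.5, size=200)

# With accurate prior information the estimates land close to the truth
beta_hat = plasso(X, y, beta_prior=true_beta)
print(np.round(beta_hat, 1))
```

Setting `eta` to zero recovers plain LASSO; a large `eta` trusts the prior estimates more, which is helpful exactly when the prior information is accurate.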
Stoll, Kathrin; Hauck, Yvonne; Downe, Soo; Edmonds, Joyce; Gross, Mechthild M; Malott, Anne; McNiven, Patricia; Swift, Emma; Thomson, Gillian; Hall, Wendy A
2016-06-01
Assessment of childbirth fear in advance of pregnancy, and early identification of modifiable factors contributing to fear, can inform public health initiatives and/or school-based educational programming for the next generation of maternity care consumers. We developed and evaluated a short fear of birth scale that incorporates the most common dimensions of fear reported by men and women prior to pregnancy: fear of labour pain, of being out of control and unable to cope with labour and birth, of complications, and of irreversible physical damage. University students in six countries (Australia, Canada, England, Germany, Iceland, and the United States; n = 2240) participated in an online survey to assess their fears and attitudes about birth. We report internal consistency reliability, corrected-item-to-total correlations, factor loadings, and convergent and discriminant validity of the new scale. The Childbirth Fear - Prior to Pregnancy (CFPP) scale showed high internal consistency across samples (α > 0.86). All corrected-item-to-total correlations exceeded 0.45, supporting the uni-dimensionality of the scale. Construct validity of the CFPP was supported by a high correlation between the new scale and a two-item visual analogue scale that measures fear of birth (r > 0.6 across samples). Weak correlations of the CFPP with scores on measures of related psychological states (anxiety, depression and stress) support the discriminant validity of the scale. The CFPP is a short, reliable and valid measure of childbirth fear among young women and men in six countries who plan to have children. Copyright © 2016 Elsevier B.V. All rights reserved.
Role of analgesics, sedatives, neuromuscular blockers, and delirium.
Hall, Jesse B; Schweickert, William; Kress, John P
2009-10-01
A major focus of critical care medicine concerns the institution of life-support therapies, such as mechanical ventilation, during periods of organ failure to permit a window of opportunity to diagnose and treat underlying disorders so that patients may be returned to their prior functional status upon recovery. With the growing success of these intensive care unit-based therapies and longer-term follow-up of patients, severe weakness involving the peripheral nervous system and muscles has been identified in many recovering patients, often confounding the time course or magnitude of recovery. Mechanical ventilation is often accompanied by pharmacologic treatments including analgesics, sedatives, and neuromuscular blockers. These drugs and the encephalopathies accompanying some forms of critical illness result in a high prevalence of delirium in mechanically ventilated patients. These drug effects likely contribute to an impaired ability to assess the magnitude of intensive care unit-acquired weakness, to additional time spent immobilized and mechanically ventilated, and to additional weakness from the patient's relative immobility and bedridden state. This review surveys recent literature documenting these relationships and identifying approaches to minimize pharmacologic contributions to intensive care unit-acquired weakness.
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2013 CFR
2013-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... and Reporting § 60.2953 What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2014 CFR
2014-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned...
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2014 CFR
2014-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... and Reporting § 60.2953 What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2013 CFR
2013-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned...
Yi, Xiaofeng; Zhang, Jian; Fan, Tiehu; Tian, Baofeng; Jiang, Chuandong
2018-03-13
Magnetic resonance sounding (MRS) is a novel geophysical method for detecting groundwater directly. By applying this method to underground projects in mines and tunnels, warning information can be provided on water bodies hidden ahead of the working face prior to excavation, thus reducing the risk of casualties and accidents. However, unlike its application at the ground surface, the application of MRS in underground environments is constrained by the narrow space, the very weak MRS signal, and the intense, complex electromagnetic interference present in mines. Focusing on the special requirements of underground MRS (UMRS) detection, this study proposes the use of an antenna with different turn numbers, employing a separated transmitter and receiver. We designed a stationary coil with stable performance parameters and a side length of 2 m, a matching circuit based on a Q-switch, and a multi-stage broad/narrowband mixed filter that cancels out most electromagnetic noise. In addition, noise in the pass-band is further eliminated by adopting statistical criteria, harmonic modeling, and stacking, all of which together allow weak UMRS signals to be reliably detected. Finally, we conducted a field case study of UMRS measurement in the Wujiagou Mine in Shanxi Province, China, with known water bodies. Our results show that the proposed method can be used to obtain UMRS signals in narrow mine environments, and the inverted hydrological information generally agrees with the actual situation. We conclude that the UMRS method proposed in this study can be used to predict hazardous water bodies at a distance of 7-9 m in front of the wall in underground mining projects.
Yi, Xiaofeng; Fan, Tiehu; Tian, Baofeng
2018-01-01
Magnetic resonance sounding (MRS) is a novel geophysical method for detecting groundwater directly. By applying this method to underground projects in mines and tunnels, warning information can be provided on water bodies hidden ahead of the working face prior to excavation, thus reducing the risk of casualties and accidents. However, unlike its application at the ground surface, the application of MRS in underground environments is constrained by the narrow space, the very weak MRS signal, and the intense, complex electromagnetic interference present in mines. Focusing on the special requirements of underground MRS (UMRS) detection, this study proposes the use of an antenna with different turn numbers, employing a separated transmitter and receiver. We designed a stationary coil with stable performance parameters and a side length of 2 m, a matching circuit based on a Q-switch, and a multi-stage broad/narrowband mixed filter that cancels out most electromagnetic noise. In addition, noise in the pass-band is further eliminated by adopting statistical criteria, harmonic modeling, and stacking, all of which together allow weak UMRS signals to be reliably detected. Finally, we conducted a field case study of UMRS measurement in the Wujiagou Mine in Shanxi Province, China, with known water bodies. Our results show that the proposed method can be used to obtain UMRS signals in narrow mine environments, and the inverted hydrological information generally agrees with the actual situation. We conclude that the UMRS method proposed in this study can be used to predict hazardous water bodies at a distance of 7–9 m in front of the wall in underground mining projects. PMID:29534007
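The pass-band cleanup steps described, stacking repeated records and subtracting modeled powerline harmonics, can be sketched as below. The signal model, sampling rate, and 50 Hz fundamental are assumptions for illustration, not the authors' instrument pipeline:

```python
import numpy as np

def remove_harmonics(signal, fs, f0=50.0, n_harm=5):
    """Least-squares fit of sinusoids at powerline harmonics, then subtract."""
    t = np.arange(signal.size) / fs
    cols = []
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return signal - A @ coef

fs = 5000.0
t = np.arange(0, 0.5, 1 / fs)
# Toy MRS-like signal: a weak decaying oscillation near the Larmor frequency
mrs = 1e-2 * np.exp(-t / 0.2) * np.cos(2 * np.pi * 2000 * t)
harmonics = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)

# Stacking 16 repeated records suppresses the random-noise component
records = [mrs + harmonics + np.random.default_rng(i).normal(0, 0.05, t.size)
           for i in range(16)]
stack = np.mean(records, axis=0)
clean = remove_harmonics(stack, fs)
print(np.abs(clean - mrs).max() < np.abs(stack - mrs).max())  # True
```

Stacking attenuates random noise by roughly the square root of the number of records, while the least-squares harmonic model removes the deterministic powerline interference that stacking cannot.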
Miller, Robert H; Bovbjerg, Randall R
2002-06-01
Medical care should be safer. Inpatient problems and solutions have received the most attention; this outpatient qualitative case study addresses a gap in knowledge. We describe safety improvements among large physician groups, model the key influences on their behavior, and identify beneficial public and private policies. All groups were trying to reduce medical injury, which was part of the sample design. The most commonly targeted problems are those that are similar across groups: shortcomings in diagnosis, abnormal tests follow-up, scope of practice and referral patterns, and continuity of care. Medical group innovators vary greatly, however, in implementation of improvements, that is, in the extent to which they implement process changes that identify events/problems, analyze and track incidents, decide how to change clinical and administrative practices, and monitor impacts of the changes. Our conceptual model identifies key determinants: (1) demand for safety comes from external factors: legal, market, and professional; (2) organizational responses depend on internal factors: group size, scope, and integration; leadership and governance; professional culture; information-system assets; and financial and intellectual capital. Further, safety is an aspect of quality (the same tools, decision making, interventions, and monitoring apply), and safety management benefits from prior efficiency management (similar skills and culture of innovation). Observed variation in even simple safeguards shows that existing safety incentives are too weak. Our model suggests that the biggest improvement would come from boosting the demand for quality and safety from both private and public larger group purchasers. Current policy relies too much on litigation and discipline, which have sometimes helped, but not solved, problems because they are inefficient, tend to drive needed information underground, and complicate needed cultural change. 
Patients' safety demand is also weak for want of information and market power. Big purchasers' demands, however, quickly influence the internal environment of medical groups, helping managers advance quality and safety toward the top of groups' congested decision-making "queues."
Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E
2018-01-01
One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
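The quoted 95% intervals translate directly into normal prior SDs on the log scale (the log interval width divided by 2 × 1.96); a quick check of the abstract's numbers:

```python
import math

def skeptical_prior_sd(lower, upper):
    """SD of a normal prior on log(RR) or log(OR) whose 95% prior
    interval on the ratio scale is (lower, upper)."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.96)

sd_rr = skeptical_prior_sd(0.50, 2.0)   # RR interval from the abstract
sd_or = skeptical_prior_sd(0.23, 4.35)  # OR interval from the abstract
print(round(sd_rr, 3), round(sd_or, 3))  # 0.354 0.75
```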
Wan Salwina, Wan Ismail; Baharudin, Azlin; Nik Ruzyanei, Nik Jaafar; Midin, Marhani; Rahman, Fairuz Nazri Abdul
2013-12-01
Attention Deficit Hyperactivity Disorder (ADHD) is a clinical diagnosis relying on persistence of symptoms across different settings, with information gathered from different informants including adolescents, parents and teachers. In this cross-sectional study involving 410 twelve-year-old adolescents, 37 teachers and 367 parents from seven schools in the Federal Territory of Kuala Lumpur, the reliability of ADHD symptom reports across the various informants was examined. ADHD symptoms (i.e. predominantly hyperactive, predominantly inattentive and combined symptoms) were assessed by adolescents, teachers and parents using the Conners-Wells' Adolescent Self-report Scale (CASS), Conners' Teachers Rating Scale (CTRS) and Conners' Parents Rating Scale (CPRS), respectively. For predominantly hyperactive symptoms, there was a statistically significant, weak positive correlation between parents' and teachers' ratings (r=0.241, p<0.01). A statistically significant, weak positive correlation was found between adolescents' and parents' ratings for predominantly inattentive symptoms (r=0.283, p<0.01). Correlations between adolescents' and parents' ratings were statistically significant but weak (r=0.294, p<0.01). Weak correlations exist between different informants' reports of ADHD symptoms among Malaysian adolescents. While multiple-informant ratings are required to facilitate the diagnosis of ADHD, effort should be taken to minimize disagreement in reporting and to better utilize the information. Copyright © 2013 Elsevier B.V. All rights reserved.
Farid, Ahmed; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen
2017-09-01
Safety performance functions (SPFs) are essential tools for highway agencies to predict crashes, identify hotspots and assess safety countermeasures. In the Highway Safety Manual (HSM), a variety of SPFs are provided for different types of roadway facilities, crash types and severity levels. Agencies lacking the resources to develop their own localized SPFs may opt to apply the HSM's SPFs to their jurisdictions. Yet municipalities that want to develop and maintain their own regional SPFs may encounter the issue of small-sample bias. Bayesian inference addresses this issue by combining the current data with prior information to achieve reliable results. It follows that the essence of Bayesian statistics is the application of informative priors, obtained from other SPFs or experts' experience. In this study, we investigate the applicability of informative priors for Bayesian negative binomial SPFs for rural divided multilane highway segments in Florida and California. An SPF with non-informative priors is developed for each state, and its parameters' distributions are assigned to the other state's SPF as informative priors. The performance of the SPFs is evaluated by applying each state's SPFs to the other state. The analysis is conducted for both total (KABCO) and severe (KAB) crashes. As per the results, applying one state's SPF with informative priors (the other state's SPF independent-variable estimates) to the latter state's conditions yields better goodness-of-fit (GOF) values than applying the former state's SPF with non-informative priors to the conditions of the latter state. This holds for both total and severe crash SPFs. Hence, for localities that prefer not to develop their own localized SPFs and instead adopt SPFs from elsewhere to save resources, the application of informative priors is shown to facilitate the process. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
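Under a normal approximation, borrowing another state's coefficient estimate as an informative prior reduces to precision weighting. A sketch with hypothetical numbers (not values from the study):

```python
def combine_normal(prior_mean, prior_sd, est, se):
    """Posterior mean/SD when a normal prior meets a normal likelihood
    summary (estimate and standard error): precision-weighted average."""
    w_prior = 1.0 / prior_sd ** 2
    w_data = 1.0 / se ** 2
    post_mean = (w_prior * prior_mean + w_data * est) / (w_prior + w_data)
    post_sd = (w_prior + w_data) ** -0.5
    return post_mean, post_sd

# Hypothetical: one state's SPF coefficient 0.80 (SD 0.10) as the prior;
# sparse local data give 0.50 with a large standard error of 0.30
m, s = combine_normal(0.80, 0.10, 0.50, 0.30)
print(round(m, 3), round(s, 3))  # 0.77 0.095
```

The sparse local estimate is pulled strongly toward the precise prior, which is exactly the small-sample stabilization the abstract describes.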
Practical Weak-lensing Shear Measurement with Metacalibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheldon, Erin S.; Huff, Eric M.
2017-05-20
Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
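The response calculation at the heart of metacalibration can be illustrated with a deliberately miscalibrated toy estimator (a sketch; a real pipeline applies the artificial shear to actual images and re-measures ellipticities):

```python
import numpy as np

# Toy model: a biased estimator that responds to total shear with an
# unknown slope of 0.7 (the calibration factor metacalibration recovers)
def measure(e_total):
    return 0.7 * e_total

rng = np.random.default_rng(2)
g_true = 0.02                      # small applied shear to recover
e0 = rng.normal(0.0, 0.2, 50000)   # intrinsic galaxy shape noise

dg = 0.01                          # small known shear for the metacal step
e_meas = measure(e0 + g_true)
# Distort by +/- dg, re-measure, and difference to get the shear response
R = (measure(e0 + g_true + dg) - measure(e0 + g_true - dg)) / (2 * dg)
g_hat = e_meas.mean() / R.mean()   # response-corrected shear estimate
print(abs(g_hat - g_true) < 0.005)  # True: bias is calibrated away
```

Dividing the mean measured ellipticity by the mean response removes the unknown multiplicative bias without any external calibration, which is the key property of the method.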
ERIC Educational Resources Information Center
Milchus, Norman J.
The Wayne County Pre-Reading Program for Preventing Reading Failure is an individually, diagnostically prescribed, perceptual-cognitive-linguistic development program. The program utilizes the largest compilation of prescriptively coded, reading readiness materials to be assigned prior to and concurrent with first-year reading instruction. The…
Analysis of Errors Made by Students Solving Genetics Problems.
ERIC Educational Resources Information Center
Costello, Sandra Judith
The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
ERIC Educational Resources Information Center
Beddoes, Kacey; Schimpf, Corey
2018-01-01
The role and influence of department heads on women in academia is understudied and weakly conceptualized. This article expounds on prior work, which identified limitations of department head literature, to put forth three problematic discourses that run through much of the department head research: the "discourse of fairness," the…
Bell to Bell: Measuring Classroom Time Usage
ERIC Educational Resources Information Center
Walkup, John R.; Farbman, David; McGaugh, Karen
2009-01-01
This article discusses research in classroom time usage and the benefits and weaknesses of prior research in this area. The article addresses in particular how to precisely measure the use of time in classrooms and how to address the issue of partial engagement, in which only a portion of the class is academically engaged. The article defines…
Scene Text Recognition using Similarity and a Lexicon with Sparse Belief Propagation
Weinman, Jerod J.; Learned-Miller, Erik; Hanson, Allen R.
2010-01-01
Scene text recognition (STR) is the recognition of text anywhere in the environment, such as signs and store fronts. Relative to document recognition, it is challenging because of font variability, minimal language context, and uncontrolled conditions. Much information available to solve this problem is frequently ignored or used sequentially. Similarity between character images is often overlooked as useful information. Because of language priors, a recognizer may assign different labels to identical characters. Directly comparing characters to each other, rather than only a model, helps ensure that similar instances receive the same label. Lexicons improve recognition accuracy but are used post hoc. We introduce a probabilistic model for STR that integrates similarity, language properties, and lexical decision. Inference is accelerated with sparse belief propagation, a bottom-up method for shortening messages by reducing the dependency between weakly supported hypotheses. By fusing information sources in one model, we eliminate unrecoverable errors that result from sequential processing, improving accuracy. In experimental results recognizing text from images of signs in outdoor scenes, incorporating similarity reduces character recognition error by 19%, the lexicon reduces word recognition error by 35%, and sparse belief propagation reduces the lexicon words considered by 99.9% with a 12X speedup and no loss in accuracy. PMID:19696446
The role of Bs-->Kπ in determining the weak phase γ
NASA Astrophysics Data System (ADS)
Gronau, M.; Rosner, J. L.
2000-06-01
The decay rates for B0-->K+π-, B+-->K0π+, and the charge-conjugate processes were found to provide information on the weak phase γ ≡ Arg(Vub*) when the ratio r of weak tree and penguin amplitudes was taken from data on B-->ππ or semileptonic B-->π decays. We show here that the rates for Bs-->K-π+ and B̄s-->K+π- can provide the necessary information on r, and estimate the statistical accuracy of forthcoming measurements at the Fermilab Tevatron.
Information-reality complementarity in photonic weak measurements
NASA Astrophysics Data System (ADS)
Mancino, Luca; Sbroscia, Marco; Roccia, Emanuele; Gianani, Ilaria; Cimini, Valeria; Paternostro, Mauro; Barbieri, Marco
2018-06-01
The emergence of realistic properties is a key problem in understanding the quantum-to-classical transition. In this respect, measurements represent a way to interface quantum systems with the macroscopic world: these can be driven in the weak regime, where a reduced back-action can be imparted by choosing meter states able to extract different amounts of information. Here we explore the implications of such weak measurement for the variation of realistic properties of two-level quantum systems pre- and postmeasurement, and extend our investigations to the case of open systems implementing the measurements.
34 CFR 99.30 - Under what conditions is prior consent required to disclose information?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false Under what conditions is prior consent required to disclose information? 99.30 Section 99.30 Education Office of the Secretary, Department of Education FAMILY... Information From Education Records? § 99.30 Under what conditions is prior consent required to disclose...
ERIC Educational Resources Information Center
Woloshyn, Vera E.; And Others
1994-01-01
Thirty-two factual statements, half consistent and half not consistent with subjects' prior knowledge, were processed by 140 sixth and seventh graders. Half were directed to use elaborative interrogation (using prior knowledge) to answer why each statement was true. Across all memory measures, elaborative interrogation subjects performed better…
A novel microfluidics-based method for probing weak protein-protein interactions.
Tan, Darren Cherng-wen; Wijaya, I Putu Mahendra; Andreasson-Ochsner, Mirjam; Vasina, Elena Nikolaevna; Nallani, Madhavan; Hunziker, Walter; Sinner, Eva-Kathrin
2012-08-07
We report the use of a novel microfluidics-based method to detect weak protein-protein interactions between membrane proteins. The tight junction protein, claudin-2, synthesised in vitro using a cell-free expression system in the presence of polymer vesicles as membrane scaffolds, was used as a model membrane protein. Individual claudin-2 molecules interact weakly, although the cumulative effect of these interactions is significant. This effect results in a transient decrease of average vesicle dispersivity and reduction in transport speed of claudin-2-functionalised vesicles. Polymer vesicles functionalised with claudin-2 were perfused through a microfluidic channel and the time taken to traverse a defined distance within the channel was measured. Functionalised vesicles took 1.19 to 1.69 times longer to traverse this distance than unfunctionalised ones. Coating the channel walls with protein A and incubating the vesicles with anti-claudin-2 antibodies prior to perfusion resulted in the functionalised vesicles taking 1.75 to 2.5 times longer to traverse this distance compared to the controls. The data show that our system is able to detect weak as well as strong protein-protein interactions. This system offers researchers a portable, easily operated and customizable platform for the study of weak protein-protein interactions, particularly between membrane proteins.
When generating answers benefits arithmetic skill: the importance of prior knowledge.
Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij
2008-09-01
People remember information better if they generate the information while studying rather than read the information. However, prior research has not investigated whether this generation effect extends to related but unstudied items and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied multiplication problems after they spent a class period generating answers to problems or reading the answers from a calculator. The effect of condition interacted with prior knowledge. Students with low prior knowledge had higher accuracy in the generate condition, but as prior knowledge increased, the advantage of generating answers decreased. The benefits of generating answers may extend to unstudied items and to classroom settings, but only for learners with low prior knowledge.
Tree Biomass Estimation of Chinese fir (Cunninghamia lanceolata) Based on Bayesian Method
Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, an allometric equation was used to analyze tree biomass of Chinese fir. The common methods for estimating allometric models have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass. PMID:24278198
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, the allometric equation W = a(D²H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating allometric models have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass.
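A minimal sketch of this kind of informative-prior fit (synthetic data and hypothetical prior values, not the paper's model or equations): fit log W = log(a) + b·log(D²H) with a Metropolis sampler, placing a Normal prior on the exponent b as if it were summarized from previously published allometric fits.

```python
import math
import random

# Hedged sketch: Bayesian allometric fitting with an informative prior on the
# exponent b.  All data below are synthetic; the Normal(0.9, 0.1^2) prior on b
# stands in for information pooled from prior published equations.

random.seed(1)
D2H = [2.0, 5.0, 10.0, 20.0, 50.0]                          # synthetic D^2*H values
logW = [-2.0 + 0.9 * math.log(x) + random.gauss(0, 0.2) for x in D2H]

def log_post(loga, b, prior_mu=0.9, prior_sd=0.1, sigma=0.2):
    """Gaussian log-likelihood on the log scale plus the informative prior on b."""
    ll = sum(-(y - (loga + b * math.log(x))) ** 2 / (2 * sigma ** 2)
             for x, y in zip(D2H, logW))
    return ll - (b - prior_mu) ** 2 / (2 * prior_sd ** 2)

# Random-walk Metropolis over (log a, b), discarding a burn-in period.
loga, b, samples = 0.0, 1.0, []
for i in range(20000):
    la, bb = loga + random.gauss(0, 0.1), b + random.gauss(0, 0.1)
    if math.log(random.random()) < log_post(la, bb) - log_post(loga, b):
        loga, b = la, bb
    if i >= 5000:
        samples.append(b)

b_mean = sum(samples) / len(samples)   # posterior mean of the exponent b
```

The prior keeps the exponent in a biologically plausible range even when the site-level data are few, which is exactly the benefit the abstract attributes to informative priors.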
Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.
2012-01-01
Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
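The competition between noisy sensory data and a prior that biases eye speed toward zero follows standard Gaussian cue combination; the sketch below is an illustration of that rule with assumed numbers, not the authors' fitted model:

```python
# Hedged sketch: Bayesian estimation with a Gaussian prior biasing eye speed
# toward zero.  The posterior mean is a precision-weighted average of the
# prior mean and the sensory measurement, so a noisy (low-contrast) input is
# pulled strongly toward the prior while a reliable input is barely affected.

def posterior_speed(measured, sigma_sensory, prior_mean=0.0, sigma_prior=5.0):
    """Precision-weighted combination of a sensory estimate and the prior."""
    w = (1.0 / sigma_sensory ** 2) / (1.0 / sigma_sensory ** 2 + 1.0 / sigma_prior ** 2)
    return w * measured + (1.0 - w) * prior_mean

# Target moving at 10 deg/s; contrast sets the sensory noise (assumed values).
high_contrast = posterior_speed(10.0, sigma_sensory=1.0)   # near the stimulus speed
low_contrast = posterior_speed(10.0, sigma_sensory=10.0)   # shrunk strongly toward zero
```

The same weighting explains the direction prior: a history-based direction attractor dominates only when the motion signal is weak.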
A comment on priors for Bayesian occupancy models.
Northrup, Joseph M; Gerber, Brian D
2018-01-01
Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are "uninformative" or "vague", such priors can easily be unintentionally highly informative. Here we report on how the specification of a "vague" normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts.
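The logit-scale pathology described above is easy to demonstrate. The sketch below (an assumed setup, not the authors' code) samples a "vague" Normal prior on a logit-scale intercept and measures how much of the implied prior mass lands at extreme occupancy probabilities:

```python
import math
import random

# Hedged sketch: a "vague" Normal(0, sd^2) prior on a logit-scale coefficient
# is not vague on the probability scale.  For large sd, most of the implied
# prior mass on occupancy probability piles up near 0 and 1.

random.seed(0)

def prior_mass_at_extremes(sd, n=100_000, cut=0.05):
    """Fraction of implied occupancy probabilities below cut or above 1 - cut."""
    hits = 0
    for _ in range(n):
        p = 1.0 / (1.0 + math.exp(-random.gauss(0.0, sd)))
        hits += (p < cut) or (p > 1.0 - cut)
    return hits / n

vague = prior_mass_at_extremes(10.0)   # most prior mass at extreme probabilities
modest = prior_mass_at_extremes(1.0)   # little prior mass at the extremes
```

This is the sensitivity check the authors recommend: inspect the prior's implication on the probability scale before treating it as uninformative.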
SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H; Xing, L
2015-06-15
Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost from collapsing the dose distribution into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from a prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed from it. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs comparable to manual planning, even though the DVH for the prior case differed from the DVHs for the 14 cases. Using a single DVH for the entire organ was also tested for comparison but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage spatial information without transforming patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem.
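The shell partition can be sketched as follows (illustrative 2-D geometry and toy dose values, not the authors' implementation): voxels are binned by normalized distance from the structure centroid, and per-shell dose extrema become candidate Dmin/Dmax constraints for the new patient.

```python
# Hedged sketch: partition a structure's voxels into inner/middle/outer shells
# by normalized distance from the centroid, then extract per-shell dose
# statistics that can be translated into Dmin/Dmax stopping criteria.

def shell_dose_stats(voxels, doses, n_shells=3):
    """Return (Dmin, Dmax) per shell, ordered from inner to outer."""
    cx = sum(x for x, y in voxels) / len(voxels)
    cy = sum(y for x, y in voxels) / len(voxels)
    d = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in voxels]
    dmax = max(d) or 1.0
    shells = [[] for _ in range(n_shells)]
    for dist, dose in zip(d, doses):
        idx = min(int(n_shells * dist / dmax), n_shells - 1)
        shells[idx].append(dose)
    return [(min(s), max(s)) for s in shells if s]

# Toy 2-D "prior plan": dose falls off with distance from the target center.
voxels = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
doses = [70.0 - 2.0 * ((x ** 2 + y ** 2) ** 0.5) for x, y in voxels]
stats = shell_dose_stats(voxels, doses)
```

Unlike a single whole-organ DVH, the per-shell extrema retain where in the structure the high and low doses occurred.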
Accommodating Uncertainty in Prior Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Vander Wiel, Scott Alan
2017-01-19
A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely defined prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application-dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
Nickel-Hydrogen Battery Fault Clearing at Low State of Charge
NASA Technical Reports Server (NTRS)
Lurie, C.
1997-01-01
Fault clearing currents were achieved and maintained at discharge rates from C/2 to C/3 at high and low states of charge. The fault clearing plateau voltage is a strong function of discharge current and of the voltage prior to the fault clearing event, and a weak function of state of charge. Voltage performance, for the range of conditions reported, is summarized.
ERIC Educational Resources Information Center
Hadjichambis, Andreas Ch.; Georgiou, Yiannis; Paraskeva-Hadjichambi, Demetra; Kyza, Eleni A.; Mappouras, Demetrios
2016-01-01
Despite the importance of understanding how the human reproductive system works, adolescents worldwide exhibit weak conceptual understanding, which leads to serious risks, such as unwanted pregnancies and sexually transmitted diseases. Studies focusing on the development and evaluation of inquiry-based learning interventions, promoting the…
[Quality of care indicators for benign prostatic hyperplasia. A qualitative study].
Navarro-Pérez, Jorge; Peiró, Salvador; Brotons-Muntó, Francisco; López-Alcina, Emilio; Real-Romaguera, Arcadio
2014-05-01
To assess quality of care indicators for benign prostatic hyperplasia (BPH), and to evaluate their strengths and weaknesses for incorporation into health information systems. Structured expert meeting, using procedures adapted from the nominal group techniques and the Rand consensus method. Valencian School of Health Studies. Forty panellists (74% doctors, 70% from primary care settings) with experience in the management of BPH from 15 departments of the Valencia Health Agency. Three workshops were held simultaneously (examination and diagnosis, drug therapy, and appropriateness and results), and the 15 quality indicators selected by the coordination group were assessed. Eleven of the 15 indicators scored in the range of high relevance. The 5 best rated were: the use of alpha-blockers + 5-alpha reductase inhibitor from certain severity level, digital rectal examination in the initial assessment, follow-up with the International Prostate Symptoms Score (IPSS), the rate of urgent catheterization in Hospital Accident & Emergency Units, initial assessment with the IPSS and the use of alpha-blockers prior to catheter removal for acute retention of urine. Some of the assessed indicators can be useful for incorporation into health information systems. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Waddington, I; Roderick, M; Naik, R
2001-02-01
To examine the methods of appointment, experience, and qualifications of club doctors and physiotherapists in professional football. Semistructured tape recorded interviews with 12 club doctors, 10 club physiotherapists, and 27 current and former players. A questionnaire was also sent to 90 club doctors; 58 were returned. In almost all clubs, methods of appointment of doctors are informal and reflect poor employment practice: posts are rarely advertised and many doctors are appointed on the basis of personal contacts and without interview. Few club doctors had prior experience or qualifications in sports medicine and very few have a written job description. The club doctor is often not consulted about the appointment of the physiotherapist; physiotherapists are usually appointed informally, often without interview, and often by the manager without involving anyone who is qualified in medicine or physiotherapy. Half of all clubs do not have a qualified (chartered) physiotherapist; such unqualified physiotherapists are in a weak position to resist threats to their clinical autonomy, particularly those arising from managers' attempts to influence clinical decisions. Almost all aspects of the appointment of club doctors and physiotherapists need careful re-examination.
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2016-04-01
There is a large amount of functional genetic data available, which can be used to inform fine-mapping association studies (in diseases with well-characterised disease pathways). Single nucleotide polymorphism (SNP) prioritization via Bayes factors is attractive because prior information can inform the effect size or the prior probability of causal association. This approach requires the specification of the effect size. If the information needed to estimate, a priori, the probability density for the effect sizes of causal SNPs in a genomic region is inconsistent or unavailable, then specifying a prior variance for the effect sizes is challenging. We propose both an empirical method to estimate this prior variance, and a coherent approach to using SNP-level functional data to inform the prior probability of causal association. Through simulation we show that when ranking SNPs by our empirical Bayes factor in a fine-mapping study, the causal SNP rank is generally as high as or higher than the rank obtained using Bayes factors with other plausible values of the prior variance. Importantly, we also show that assigning SNP-specific prior probabilities of association based on expert prior functional knowledge of the disease mechanism can lead to improved causal SNP ranks compared with ranking under identical prior probabilities of association. We demonstrate the use of our methods by applying them to the fine mapping of the CASP8 region of chromosome 2, using genotype data from the Collaborative Oncological Gene-Environment Study (COGS) Consortium. The data we analysed included approximately 46,000 breast cancer case and 43,000 healthy control samples. © 2016 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
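The Bayes-factor ranking above can be sketched with the standard approximate Bayes-factor algebra (a generic illustration with hypothetical effect sizes and prior probabilities, not the authors' empirical estimator): with β̂ ~ N(β, V) and a N(0, W) prior on the causal effect, the Bayes factor for association is BF₁₀ = √(V/(V+W)) · exp(z²W / (2(V+W))), z = β̂/√V.

```python
import math

# Hedged sketch of SNP prioritization via approximate Bayes factors.
# SNP-specific prior probabilities (e.g. from functional annotation) enter
# through the prior odds: posterior odds = BF10 * prior_prob / (1 - prior_prob).

def abf(beta_hat, se, W):
    """Approximate Bayes factor for association under a N(0, W) effect prior."""
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    return math.sqrt(V / (V + W)) * math.exp(z2 * W / (2.0 * (V + W)))

def posterior_odds(beta_hat, se, W, prior_prob):
    return abf(beta_hat, se, W) * prior_prob / (1.0 - prior_prob)

# Two SNPs with identical association evidence: a functional annotation
# (hypothetical prior probabilities) promotes the annotated SNP in the ranking.
odds_plain = posterior_odds(0.10, 0.02, W=0.04, prior_prob=1e-4)
odds_annotated = posterior_odds(0.10, 0.02, W=0.04, prior_prob=1e-2)
```

This shows the mechanism the abstract exploits: identical data, different functional priors, different ranks.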
NASA Technical Reports Server (NTRS)
Backus, George E.
1999-01-01
The purpose of the grant was to study how prior information about the geomagnetic field can be used to interpret surface and satellite magnetic measurements, to generate quantitative descriptions of prior information that might be so used, and to use this prior information to obtain from satellite data a model of the core field with statistically justifiable error estimates. The need for prior information in geophysical inversion has long been recognized. Data sets are finite, and faithful descriptions of aspects of the earth almost always require infinite-dimensional model spaces. By themselves, the data can confine the correct earth model only to an infinite-dimensional subset of the model space. Earth properties other than direct functions of the observed data cannot be estimated from those data without prior information about the earth. Prior information is based on what the observer already knows before the data become available. Such information can be "hard" or "soft". Hard information is a belief that the real earth must lie in some known region of model space. For example, the total ohmic dissipation in the core is probably less than the total observed geothermal heat flow out of the earth's surface. (In principle, ohmic heat in the core can be recaptured to help drive the dynamo, but this effect is probably small.) "Soft" information is a probability distribution on the model space, a distribution that the observer accepts as a quantitative description of her/his beliefs about the earth. The probability distribution can be a subjective prior in the sense of Bayes or the objective result of a statistical study of previous data or relevant theories.
NASA Astrophysics Data System (ADS)
Hall, Mildred V.
Part I. Intensive courses have been shown to be associated with equal or greater student success than traditional-length courses in a wide variety of disciplines and education levels. Student records from intensive and traditional-length introductory general chemistry courses were analyzed to determine the effects of the course format, the level of academic experience, life experience (age), GPA, academic major and gender on student success in the course. Pretest scores, GPA and ACT composite scores were used as measures of academic ability and prior knowledge; t-tests comparing the means of these variables were used to establish that the populations were comparable prior to the course. Final exam scores, total course points and pretest-posttest differences were used as measures of student success; t-tests were used to determine if differences existed between the populations. ANCOVA analyses revealed that student GPA, pretest scores and course format were the only variables tested that were significant in accounting for the variance of the academic success measures. In general, the results indicate that students achieved greater academic success in the intensive-format course, regardless of the level of academic experience, life experience, academic major or gender. Part II. Weakly coordinating anions have many important applications, one of which is to function as co-catalysts in the polymerization of olefins by zirconocene. The structure of the tris(tetrachlorobenzenediolato)phosphate(V) or "trisphat" anion suggests that it might be an outstanding example of a weakly coordinating anion. Trisphat acid was synthesized and immediately used to prepare the stable tributylammonium trisphat, which was further reacted to produce trisphat salts of Group I metal cations in high yields. Results of the ³⁵Cl NQR analysis of these trisphat salts indicate only very weak coordination between the metal cations and the chlorine atoms of the trisphat anion.
Non-Gaussian information from weak lensing data via deep learning
NASA Astrophysics Data System (ADS)
Gupta, Arushi; Matilla, José Manuel Zorrilla; Hsu, Daniel; Haiman, Zoltán
2018-05-01
Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a two-dimensional convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ωm,σ8} . Using the area of the confidence contour in the {Ωm,σ8} plane as a figure of merit, derived from simulated convergence maps smoothed on a scale of 1.0 arcmin, we show that the neural network yields ≈5 × tighter constraints than the power spectrum, and ≈4 × tighter than the lensing peaks. Such gains illustrate the extent to which weak lensing data encode cosmological information not accessible to the power spectrum or even other, non-Gaussian statistics such as lensing peaks.
Prefrontal Engagement during Source Memory Retrieval Depends on the Prior Encoding Task
Kuo, Trudy Y.; Van Petten, Cyma
2008-01-01
The prefrontal cortex is strongly engaged by some, but not all, episodic memory tests. Prior work has shown that source recognition tests—those that require memory for conjunctions of studied attributes—yield deficient performance in patients with prefrontal damage and greater prefrontal activity in healthy subjects, as compared to simple recognition tests. Here, we tested the hypothesis that there is no intrinsic relationship between the prefrontal cortex and source memory, but that the prefrontal cortex is engaged by the demand to retrieve weakly encoded relationships. Subjects attempted to remember object/color conjunctions after an encoding task that focused on object identity alone, and an integrative encoding task that encouraged attention to object/color relationships. After the integrative encoding task, the late prefrontal brain electrical activity that typically occurs in source memory tests was eliminated. Earlier brain electrical activity related to successful recognition of the objects was unaffected by the nature of prior encoding. PMID:16839287
Rice, Anne M; Mahling, Ryan; Fealey, Michael E; Rannikko, Anika; Dunleavy, Katie; Hendrickson, Troy; Lohese, K Jean; Kruggel, Spencer; Heiling, Hillary; Harren, Daniel; Sutton, R Bryan; Pastor, John; Hinderliter, Anne
2014-09-01
Eukaryotic lipids in a bilayer are dominated by weak cooperative interactions. These interactions impart highly dynamic and pliable properties to the membrane. C2 domain-containing proteins in the membrane also interact weakly and cooperatively giving rise to a high degree of conformational plasticity. We propose that this feature of weak energetics and plasticity shared by lipids and C2 domain-containing proteins enhance a cell's ability to transduce information across the membrane. We explored this hypothesis using information theory to assess the information storage capacity of model and mast cell membranes, as well as differential scanning calorimetry, carboxyfluorescein release assays, and tryptophan fluorescence to assess protein and membrane stability. The distribution of lipids in mast cell membranes encoded 5.6-5.8 bits of information. More information resided in the acyl chains than the head groups and in the inner leaflet of the plasma membrane than the outer leaflet. When the lipid composition and information content of model membranes were varied, the associated C2 domains underwent large changes in stability and denaturation profile. The C2 domain-containing proteins are therefore acutely sensitive to the composition and information content of their associated lipids. Together, these findings suggest that the maximum flow of signaling information through the membrane and into the cell is optimized by the cooperation of near-random distributions of membrane lipids and proteins. This article is part of a Special Issue entitled: Interfacially Active Peptides and Proteins. Guest Editors: William C. Wimley and Kalina Hristova. Copyright © 2014 Elsevier B.V. All rights reserved.
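The bit counts quoted above are Shannon entropies of a composition distribution. A minimal sketch (the species fractions below are illustrative, not the paper's measured composition):

```python
import math

# Hedged sketch: Shannon entropy, in bits, of a lipid-composition
# distribution.  A near-uniform composition stores close to the maximum
# log2(n) bits; a skewed composition stores less.

def entropy_bits(fractions):
    """H = -sum(p * log2 p) over nonzero composition fractions."""
    return -sum(p * math.log2(p) for p in fractions if p > 0)

uniform = entropy_bits([1 / 8] * 8)                   # maximal for 8 species
skewed = entropy_bits([0.6, 0.2, 0.1, 0.05, 0.05])    # below log2(5)
```

On this scale, the reported 5.6-5.8 bits corresponds to a near-random distribution over several dozen lipid species, consistent with the paper's "near-random distributions" conclusion.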
A coupled electro-thermal Discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Homsi, L.; Geuzaine, C.; Noels, L.
2017-11-01
This paper presents a Discontinuous Galerkin scheme to solve the nonlinear elliptic partial differential equations of coupled electro-thermal problems. We discuss the fundamental equations for the transport of electricity and heat in terms of macroscopic variables such as temperature and electric potential. A fully coupled nonlinear weak formulation for electro-thermal problems is developed based on continuum mechanics equations expressed in terms of an energetically conjugated pair of fluxes and field gradients. The weak form can thus be formulated as a Discontinuous Galerkin method. The existence and uniqueness of the weak form solution are proved. The numerical properties of the nonlinear elliptic problems, i.e., consistency and stability, are demonstrated under specific conditions: use of a sufficiently high stabilization parameter and of at least quadratic polynomial approximations. Moreover, the a priori error estimates in the H1-norm and in the L2-norm are shown to be optimal in the mesh size with the polynomial approximation degree.
Impact of theoretical priors in cosmological analyses: The case of single field quintessence
NASA Astrophysics Data System (ADS)
Peirone, Simone; Martinelli, Matteo; Raveri, Marco; Silvestri, Alessandra
2017-09-01
We investigate the impact of general conditions of theoretical stability and cosmological viability on dynamical dark energy models. As a powerful example, we study whether minimally coupled, single field quintessence models that are safe from ghost instabilities can source the Chevallier-Polarski-Linder (CPL) expansion history recently shown to be mildly favored by a combination of cosmic microwave background (Planck) and weak lensing (KiDS) data. We find that in their most conservative form, the theoretical conditions impact the analysis in such a way that smooth single field quintessence becomes significantly disfavored with respect to the standard ΛCDM cosmological model. This is due to the fact that these conditions cut a significant portion of the (w0,wa) parameter space for CPL, in particular eliminating the region that would be favored by weak lensing data. Within the scenario of a smooth dynamical dark energy parametrized with CPL, weak lensing data favor a region that would require multiple fields to ensure gravitational stability.
A comment on priors for Bayesian occupancy models
Gerber, Brian D.
2018-01-01
Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are “uninformative” or “vague”, such priors can easily be unintentionally highly informative. Here we report on how the specification of a “vague” normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts. PMID:29481554
Xi, Jianing; Wang, Minghui; Li, Ao
2018-06-05
Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively infrequently mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, these network-based methods do not exploit prior information from mRNA expression patterns, which have also proven highly informative of cancer progression. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Our framework also applies Frobenius norm regularization to overcome overfitting. A sparsity-inducing penalty is employed to obtain sparse scores in gene representations, of which the top-scored genes are selected as driver candidates. Evaluation experiments with known benchmark genes indicate that the performance of our method benefits from both types of prior information. Our method also outperforms the existing network-based methods and detects some driver genes that are not predicted by the competing methods. In summary, our proposed method can improve driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust, sparse co-regularized matrix factorization framework.
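The core machinery, nonnegative matrix factorization with a Frobenius penalty against overfitting and an L1 penalty for sparse gene scores, can be sketched with standard multiplicative updates; the co-regularization terms for the network and expression priors are omitted here, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic patients-x-genes mutation matrix; the first 3 columns mimic
# recurrently mutated "driver-like" genes (hypothetical, not real data).
X = (rng.random((60, 40)) < 0.08).astype(float)
X[:, :3] += (rng.random((60, 3)) < 0.4).astype(float)

def sparse_nmf(X, k=4, lam=0.1, mu=0.1, iters=500):
    """Multiplicative updates for min ||X - WH||_F^2 + mu*||W||_F^2 + lam*||H||_1.
    A plain sketch: the paper's method adds network/expression co-regularizers."""
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-9
    for _ in range(iters):
        W *= (X @ H.T) / (W @ (H @ H.T) + mu * W + eps)   # Frobenius-penalized
        H *= (W.T @ X) / ((W.T @ W) @ H + lam + eps)      # L1-penalized
    return W, H

W, H = sparse_nmf(X)
scores = H.max(axis=0)          # per-gene score; top-scored genes = candidates
print(np.argsort(scores)[::-1][:3])
```

On this synthetic input the three dense columns should receive the largest scores; with real mutation data the network and expression co-regularizers are what lift low-frequency drivers above the noise.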
The neural basis of belief updating and rational decision making
Achtziger, Anja; Alós-Ferrer, Carlos; Hügelschäfer, Sabine; Steinhauser, Marco
2014-01-01
Rational decision making under uncertainty requires forming beliefs that integrate prior and new information through Bayes’ rule. Human decision makers typically deviate from Bayesian updating by either overweighting the prior (conservatism) or overweighting new information (e.g. the representativeness heuristic). We investigated these deviations through measurements of electrocortical activity in the human brain during incentivized probability-updating tasks and found evidence of extremely early commitment to boundedly rational heuristics. Participants who overweight new information display a lower sensibility to conflict detection, captured by an event-related potential (the N2) observed around 260 ms after the presentation of new information. Conservative decision makers (who overweight prior probabilities) make up their mind before new information is presented, as indicated by the lateralized readiness potential in the brain. That is, they do not inhibit the processing of new information but rather immediately rely on the prior for making a decision. PMID:22956673
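The two deviations, conservatism and overweighting new information, are commonly modeled by raising the likelihood to a weight before applying Bayes' rule; a toy numeric sketch (this weighting device is a standard textbook illustration, not the authors' model):

```python
def update(prior, like_h, like_not_h, w=1.0):
    """Posterior P(H|D) with the likelihood raised to weight w.
    w = 1: exact Bayes' rule; w < 1: conservatism (prior dominates);
    w > 1: overweighting new information (representativeness)."""
    num = prior * like_h ** w
    den = num + (1 - prior) * like_not_h ** w
    return num / den

prior = 0.7                 # strong prior belief in hypothesis H
lh, lnh = 0.2, 0.8          # the new datum favors not-H
print(round(update(prior, lh, lnh, 1.0), 3))  # Bayesian update: 0.368
print(round(update(prior, lh, lnh, 0.3), 3))  # conservative: stays near prior
print(round(update(prior, lh, lnh, 3.0), 3))  # overweights new data: near 0
```

The EEG findings above correspond to these two regimes: conservative decision makers commit before the datum arrives (effectively small w), while heuristic responders let the datum swamp the prior.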
Enhancing robustness of multiparty quantum correlations using weak measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Uttam, E-mail: uttamsingh@hri.res.in; Mishra, Utkarsh, E-mail: utkarsh@hri.res.in; Dhar, Himadri Shekhar, E-mail: dhar.himadri@gmail.com
Multipartite quantum correlations are important resources for the development of quantum information and computation protocols. However, the resourcefulness of multipartite quantum correlations in practical settings is limited by its fragility under decoherence due to environmental interactions. Though there exist protocols to protect bipartite entanglement under decoherence, the implementation of such protocols for multipartite quantum correlations has not been sufficiently explored. Here, we study the effect of a local amplitude damping channel on the generalized Greenberger–Horne–Zeilinger state, and use a protocol of optimal reversal quantum weak measurement to protect the multipartite quantum correlations. We observe that the weak measurement reversal protocol enhances the robustness of multipartite quantum correlations. Further, it increases the critical damping value that corresponds to entanglement sudden death. To emphasize the efficacy of the technique in protection of multipartite quantum correlation, we investigate two proximately related quantum communication tasks, namely, quantum teleportation in a one-sender, many-receivers setting and multiparty quantum information splitting, through a local amplitude damping channel. We observe an increase in the average fidelity of both quantum communication tasks under the weak measurement reversal protocol. The method may prove beneficial, for combating external interactions, in other quantum information tasks using multipartite resources. Highlights: • Extension of weak measurement reversal scheme to protect multiparty quantum correlations. • Protection of multiparty quantum correlation under local amplitude damping noise. • Enhanced fidelity of quantum teleportation in one sender and many receivers setting. • Enhanced fidelity of quantum information splitting protocol.
NASA Astrophysics Data System (ADS)
Zhao, Ming; Jia, Xiaodong
2017-09-01
Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
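The strategy, scoring each singular component by an information index rather than energy and reweighting before reconstruction, can be sketched with a toy singular-spectrum setup; the spectral-peak index below is a crude stand-in for the paper's periodic modulation intensity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 512, 64
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 16)          # periodic fault-like component
noisy = clean + rng.normal(0, 1.0, n)       # buried in heavy noise

# Hankel embedding of the 1-D signal, then SVD into singular components.
H = np.array([noisy[i:i + L] for i in range(n - L + 1)])
U, s, Vt = np.linalg.svd(H, full_matrices=False)

def diag_average(M):
    """Map an (n-L+1) x L matrix back to a length-n series (SSA-style)."""
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(M.shape[0]):
        out[i:i + L] += M[i]
        cnt[i:i + L] += 1
    return out / cnt

comps = [diag_average(s[k] * np.outer(U[:, k], Vt[k])) for k in range(len(s))]

def info_index(x):
    """Stand-in information index: fraction of power at the dominant frequency."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec.max() / spec.sum()

w = np.array([info_index(c) for c in comps])
keep = np.argsort(w)[-2:]                   # the two most "informative" SCs
denoised = sum(comps[k] for k in keep)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_den = np.mean((denoised - clean) ** 2)
print(f"MSE noisy: {mse_noisy:.3f}  MSE denoised: {mse_den:.3f}")
```

A purely energy-based truncation would rank components by `s`; ranking by `w` instead is what lets a weak but periodic component survive the truncation, which is the point of the reweighted scheme.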
Garrard, Georgia E; McCarthy, Michael A; Vesk, Peter A; Radford, James Q; Bennett, Andrew F
2012-01-01
1. Informative Bayesian priors can improve the precision of estimates in ecological studies or estimate parameters for which little or no information is available. While Bayesian analyses are becoming more popular in ecology, the use of strongly informative priors remains rare, perhaps because examples of informative priors are not readily available in the published literature. 2. Dispersal distance is an important ecological parameter, but is difficult to measure and estimates are scarce. General models that provide informative prior estimates of dispersal distances will therefore be valuable. 3. Using a world-wide data set on birds, we develop a predictive model of median natal dispersal distance that includes body mass, wingspan, sex and feeding guild. This model predicts median dispersal distance well when using the fitted data and an independent test data set, explaining up to 53% of the variation. 4. Using this model, we predict a priori estimates of median dispersal distance for 57 woodland-dependent bird species in northern Victoria, Australia. These estimates are then used to investigate the relationship between dispersal ability and vulnerability to landscape-scale changes in habitat cover and fragmentation. 5. We find evidence that woodland bird species with poor predicted dispersal ability are more vulnerable to habitat fragmentation than those species with longer predicted dispersal distances, thus improving the understanding of this important phenomenon. 6. The value of constructing informative priors from existing information is also demonstrated. When used as informative priors for four example species, predicted dispersal distances reduced the 95% credible intervals of posterior estimates of dispersal distance by 8-19%. Further, should we have wished to collect information on avian dispersal distances and relate it to species' responses to habitat loss and fragmentation, data from 221 individuals across 57 species would have been required to obtain estimates with the same precision as those provided by the general model. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.
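The reported 8-19% narrowing of credible intervals is the generic precision gain from an informative prior under conjugacy; a minimal normal-normal sketch with made-up numbers standing in for log dispersal distances:

```python
import math

def posterior(prior_mean, prior_sd, data_mean, data_sd):
    """Conjugate normal-normal update: precision-weighted combination."""
    w0, w1 = 1 / prior_sd**2, 1 / data_sd**2
    mean = (w0 * prior_mean + w1 * data_mean) / (w0 + w1)
    sd = math.sqrt(1 / (w0 + w1))
    return mean, sd

# Hypothetical log-dispersal-distance estimates (illustrative values only):
m_flat, s_flat = posterior(0.0, 1e6, 1.2, 0.5)   # effectively flat prior
m_inf, s_inf = posterior(1.0, 0.8, 1.2, 0.5)     # informative prior from model
print(f"flat prior:        {m_flat:.2f} ± {1.96 * s_flat:.2f}")
print(f"informative prior: {m_inf:.2f} ± {1.96 * s_inf:.2f}")
print(f"CI width reduction: {100 * (1 - s_inf / s_flat):.0f}%")
```

With these illustrative numbers the interval shrinks by roughly the same order as the study reports; the gain grows as the prior tightens relative to the data.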
Poor practice and knowledge among traditional birth attendants in Eastern Sudan.
Ali, A A; Siddig, M F
2012-11-01
To identify and understand knowledge and practice among traditional birth attendants (TBAs), a total of 111 TBAs were interviewed at Kassala, Eastern Sudan between March and April 2011. Hand-washing prior to the delivery was a universal practice but only 25.2% of the interviewed TBAs used sterilised equipment. TBAs in this study appeared to have a low level of awareness about when a mother should be referred to hospital, and lacked basic information on family planning and HIV/AIDS. None of these 111 TBAs knew or used equipment for neonatal resuscitation (such as bag, tube and mask) or knew neonatal signs that needed extra attention such as change in skin colour, weak suckling and respiratory distress, and nearly one-third (28.8%) of the respondents believed in a few days delay in milk production. Thus, substantial effort is needed to improve the knowledge and practice among TBAs in Eastern Sudan, including training programmes, and this might be the best hope to achieve the Millennium Development Goals.
Tukiendorf, Andrzej; Mansournia, Mohammad Ali; Wydmański, Jerzy; Wolny-Rokicka, Edyta
2017-04-01
Background: Clinical datasets for epithelial ovarian cancer brain metastatic patients are usually small in size. When adequate case numbers are lacking, resulting estimates of regression coefficients may demonstrate bias. One of the direct approaches to reduce such sparse-data bias is based on penalized estimation. Methods: A re-analysis of formerly reported hazard ratios in diagnosed patients was performed using penalized Cox regression with a popular SAS package, providing additional software code for the statistical computational procedure. Results: It was found that the penalized approach can readily diminish sparse-data artefacts and radically reduce the magnitude of estimated regression coefficients. Conclusions: It was confirmed that classical statistical approaches may exaggerate regression estimates or distort study interpretations and conclusions. The results support the thesis that penalization via weakly informative priors and data augmentation is among the safest approaches to shrink sparse-data artefacts frequently occurring in epidemiological research.
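Penalization via data augmentation can be illustrated on a 2x2 table with a zero cell; the prior below (OR prior median 1, 95% limits roughly 1/40 to 40) follows the standard weakly informative recipe, and the counts are made up:

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio of a 2x2 table (exposed/unexposed x case/control)."""
    return math.log((a * d) / (b * c))

# Hypothetical sparse 2x2 table with a zero cell: the crude odds ratio is 0
# and its log is undefined -- the classic sparse-data artefact.
a, b, c, d = 0, 10, 25, 100

# Weakly informative Normal(0, v) prior on the log OR, expressed as prior
# pseudo-data: a 2x2 table with all four cells equal to 4/v matches the
# prior variance, since var(log OR) = 1/a + 1/b + 1/c + 1/d.
v = (math.log(40) / 1.96) ** 2     # 95% prior limits for the OR: 1/40 to 40
p = 4 / v

# Crude single-stratum version of the augmentation idea (proper
# implementations add the pseudo-table as a separate stratum in the model).
penalized = log_or(a + p, b + p, c + p, d + p)
print(f"penalized OR: {math.exp(penalized):.2f}")
```

The penalized estimate is finite and pulled toward the prior median of 1, the same shrinkage behavior the re-analysis reports for penalized Cox regression.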
McGowan, Catherine R; Harris, Magdalena; Platt, Lucy; Hope, Vivian; Rhodes, Tim
2018-05-11
Since 2013, North America has experienced a sharp increase in unintentional fatal overdoses: fentanyl, and its analogues, are believed to be primarily responsible. Currently, the most practical means for people who use drugs (PWUD) to avoid or mitigate risk of fentanyl-related overdose is to use drugs in the presence of someone who is in possession of, and experienced using, naloxone. Self-test strips which detect fentanyl, and some of its analogues, have been developed for off-label use allowing PWUD to test their drugs prior to consumption. We review the evidence on the off-label sensitivity and specificity of fentanyl test strips, and query whether the accuracy of fentanyl test strips might be mediated according to situated practices of use. We draw attention to the weak research evidence informing the use of fentanyl self-testing strips. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Understanding Graduate Teaching Assistants as Tutorial Instructors
NASA Astrophysics Data System (ADS)
Scherr, Rachel E.; Elby, A.
2006-12-01
Physics graduate teaching assistants are essential to the implementation of many collaborative active-learning environments, including tutorials. However, many TAs have trouble teaching effectively in these formats. Anecdotal evidence suggests that the problems may include inappropriate models of physics students, unproductive theories of learning, lack of experience with modern pedagogical methods, and weaknesses in understanding basic physics topics. A new research project at the University of Maryland is investigating the specific nature of TAs' experience with reform instruction using in-depth studies of TAs in course preparation sessions, in the tutorial classroom, in a weekly teaching seminar, and in reflective interviews. We find that all TAs studied recognize the insufficiency of traditional instruction to at least some extent, citing as evidence their own learning experiences, prior teaching experiences, and exposure to FCI-type data. We also observe great variability in views of the nature of physics knowledge and learning (both professed and enacted). These results are informing the development of the professional development program for physics teaching assistants at the University of Maryland.
Image Fusion During Vascular and Nonvascular Image-Guided Procedures
Abi-Jaoudeh, Nadine; Kobeiter, Hicham; Xu, Sheng; Wood, Bradford J.
2013-01-01
Image fusion may be useful in any procedure where previous imaging such as positron emission tomography, magnetic resonance imaging, or contrast-enhanced computed tomography (CT) defines information that is referenced to the procedural imaging, to the needle or catheter, or to an ultrasound transducer. Fusion of prior and intraoperative imaging provides real-time feedback on tumor location or margin, metabolic activity, device location, or vessel location. Multimodality image fusion in interventional radiology was initially introduced for biopsies and ablations, especially for lesions only seen on arterial phase CT, magnetic resonance imaging, or positron emission tomography/CT, but has more recently been applied to other vascular and nonvascular procedures. Two different types of platforms are commonly used for image fusion and navigation: (1) electromagnetic tracking and (2) cone-beam CT. Both technologies are reviewed, along with their strengths and weaknesses, indications, when to use one vs the other, tips and guidance to streamline use, and early evidence defining the clinical benefits of these rapidly evolving, commercially available, and emerging techniques. PMID:23993079
Adipose Gene Expression Prior to Weight Loss Can Differentiate and Weakly Predict Dietary Responders
Mutch, David M.; Temanni, M. Ramzi; Henegar, Corneliu; Combes, Florence; Pelloux, Véronique; Holst, Claus; Sørensen, Thorkild I. A.; Astrup, Arne; Martinez, J. Alfredo; Saris, Wim H. M.; Viguerie, Nathalie; Langin, Dominique; Zucker, Jean-Daniel; Clément, Karine
2007-01-01
Background The ability to identify obese individuals who will successfully lose weight in response to dietary intervention will revolutionize disease management. Therefore, we asked whether it is possible to identify subjects who will lose weight during dietary intervention using only a single gene expression snapshot. Methodology/Principal Findings The present study involved 54 female subjects from the Nutrient-Gene Interactions in Human Obesity-Implications for Dietary Guidelines (NUGENOB) trial to determine whether subcutaneous adipose tissue gene expression could be used to predict weight loss prior to the 10-week consumption of a low-fat hypocaloric diet. Using several statistical tests revealed that the gene expression profiles of responders (8–12 kgs weight loss) could always be differentiated from non-responders (<4 kgs weight loss). We also assessed whether this differentiation was sufficient for prediction. Using a bottom-up (i.e. black-box) approach, standard class prediction algorithms were able to predict dietary responders with up to 61.1%±8.1% accuracy. Using a top-down approach (i.e. using differentially expressed genes to build a classifier) improved prediction accuracy to 80.9%±2.2%. Conclusion Adipose gene expression profiling prior to the consumption of a low-fat diet is able to differentiate responders from non-responders as well as serve as a weak predictor of subjects destined to lose weight. While the degree of prediction accuracy currently achieved with a gene expression snapshot is perhaps insufficient for clinical use, this work reveals that the comprehensive molecular signature of adipose tissue paves the way for the future of personalized nutrition. PMID:18094752
Content Analysis of Standardized-Patients' Descriptive Feedback on Student Performance on the CPX.
Lee, Young Hee; Lee, Young-Mee; Kim, Byung Soo
2010-12-01
The goal of this study was to explore what kind of additional information is provided by the descriptive comments other than the rating scales, on the physician-patient interaction (PPI) in the clinical performance examination (CPX) and its feedback role in identifying students' strengths and weaknesses in communication skills. The data were collected from 18 medical schools in Seoul and Gyeonggi region, which participated in the CPX for fourth-year medical students in 2006 and 2007. In total 12,650 examination cases in 2006 and 12,814 cases in 2007 were analyzed. Descriptive comments from the standardized patients (SPs) were analyzed by content analysis, which includes a 4-step process: coding, conceptualizing, categorizing and explanation. Ten categories (41 concepts) for 'strength' and 11 for 'weakness' (40 concepts) in the PPI were extracted. Among them, 10 categories were the same in both strength and weakness: providing adequate interview atmosphere, attentive listening, providing emotional support, non-verbal behaviors, professional attitude, questioning, explanation, reaching agreement, counseling & education and conducting adequate physical examination. For the 'structured and organized interview', only weakness was described. In 'providing emotional support' and 'adequate interview atmosphere', comments on strengths were more frequently mentioned than weaknesses. However, communication skills that were related to non-verbal behaviors were more frequently considered weaknesses rather than strengths. The numbers and content of the SP's comments on students' strengths and weaknesses in the PPI varied depending on the case specificities. The results suggest that the SPs' descriptive comments on student' performance on the CPX can provide additional information versus structured quantitative assessment tools such as performance checklists and rating scales. 
In particular, this information can be used as valuable feedback to identify the advantages and disadvantages of the PPI and to enhance students' communication skills.
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
Health research capacity building in Georgia: a case-based needs assessment.
Squires, A; Chitashvili, T; Djibuti, M; Ridge, L; Chyun, D
2017-06-01
Research capacity building in the health sciences in low- and middle-income countries (LMICs) has typically focused on bench-science capacity, but research examining health service delivery and health workforce is equally necessary to determine the best ways to deliver care. The Republic of Georgia, formerly a part of the Soviet Union, has multiple issues within its healthcare system that would benefit from expanded research capacity, but the current research environment needs to be explored prior to examining research-focused activities. The purpose of this project was to conduct a needs assessment focused on developing research capacity in the Republic of Georgia, with an emphasis on workforce and network development. A case study approach guided by a needs assessment format was used. We conducted in-country, informal, semi-structured interviews in English with key informants and focus groups with faculty, students, and representatives of local non-governmental organizations. Purposive and snowball sampling approaches were used to recruit participants, with key informant interviews scheduled prior to arrival in country. Documents relevant to research capacity building were also included. Interview results were coded via content analysis. Final results were organized into a SWOT (strengths, weaknesses, opportunities, threats) analysis format, with the report shared with participants. There is widespread interest among students and faculty in Georgia in building research capacity. Lack of funding was identified by many informants as a barrier to research. Many critical research skills, such as proposal development, qualitative research skills, and statistical analysis, were reported as very limited. Participants expressed concerns about the ethics of research, with some suggesting that research is undertaken to punish or 'expose' subjects.
However, students and faculty are highly motivated to improve their skills, are open to a variety of learning modalities, and have research priorities aligned with Georgian health needs. This study's findings indicate that while the Georgian research infrastructure needs further development, Georgian students and faculty are eager to supplement its gaps by improving their own skills. These findings are consistent with those seen in other developing country contexts. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Knowledge Structures of Entering Computer Networking Students and Their Instructors
ERIC Educational Resources Information Center
DiCerbo, Kristen E.
2007-01-01
Students bring prior knowledge to their learning experiences. This prior knowledge is known to affect how students encode and later retrieve new information learned. Teachers and content developers can use information about students' prior knowledge to create more effective lessons and materials. In many content areas, particularly the sciences,…
Nudging toward Inquiry: Awakening and Building upon Prior Knowledge
ERIC Educational Resources Information Center
Fontichiaro, Kristin, Comp.
2010-01-01
"Prior knowledge" (sometimes called schema or background knowledge) is information one already knows that helps him/her make sense of new information. New learning builds on existing prior knowledge. In traditional reporting-style research projects, students bypass this crucial step and plow right into answer-finding. It's no wonder that many…
Practical Weak-lensing Shear Measurement with Metacalibration
Sheldon, Erin S.; Huff, Eric M.
2017-05-19
We report that metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Finally, using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
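At its core the method forms a numerical derivative of the shear estimator with respect to an artificially applied shear, then divides it out; a toy sketch with a deliberately miscalibrated estimator (the real procedure applies the shear at the image level after PSF deconvolution):

```python
# Toy stand-in: the "estimator" has a hidden multiplicative bias, much as a
# real image-level shear estimator does after PSF convolution and pixelization.
TRUE_M = 0.83          # hidden calibration factor, unknown to the analysis

def measure(shear):
    """Pretend measurement: true shear distorted by the hidden bias."""
    return TRUE_M * shear

g_true = 0.02
e_obs = measure(g_true)

# Metacalibration: apply small +/- shears and form the finite-difference
# response of the estimator, then calibrate the observed estimate with it.
dg = 0.01
R = (measure(g_true + dg) - measure(g_true - dg)) / (2 * dg)

g_cal = e_obs / R
print(round(g_cal, 6))  # → 0.02
```

In the real pipeline `R` is averaged over the galaxy ensemble, and the paper's selection-bias term adds an analogous response for the cuts applied to measured properties.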
Hee, S.; Vázquez, J. A.; Handley, W. J.; ...
2016-12-01
Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-α data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance ΛCDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as ‘phantom dark energy’) is identified within the 1.5σ confidence intervals of the posterior distribution. In order to identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback–Leibler divergence. Moreover, this formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide much more constraining power in comparison to the Lyman-α datasets. Furthermore, SNIa and BAO constrain most strongly around the redshift range 0.1-0.5, whilst the Lyman-α data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular dataset, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested, despite the weakly preferred w(z) structure in the data.
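The Kullback-Leibler divergence used here to score dataset constraining power has a closed form for Gaussians; a one-parameter sketch with illustrative numbers (not the paper's posteriors):

```python
import math

def kl_gauss(mu_p, sd_p, mu_q, sd_q):
    """KL(posterior || prior) for 1-D Gaussians, in natural log units (nats)."""
    return (math.log(sd_q / sd_p)
            + (sd_p**2 + (mu_p - mu_q)**2) / (2 * sd_q**2) - 0.5)

# Broad prior on an equation-of-state node w(z_i), centered on -1.
prior_mu, prior_sd = -1.0, 1.0
# Hypothetical posteriors: a constraining dataset (SNIa/BAO-like) vs a
# weakly constraining one (Lyman-alpha-like).
strong = kl_gauss(-0.95, 0.1, prior_mu, prior_sd)
weak = kl_gauss(-1.0, 0.8, prior_mu, prior_sd)
print(f"information gain, strong dataset: {strong:.2f} nats")
print(f"information gain, weak dataset:   {weak:.2f} nats")
```

A dataset that barely narrows or shifts the prior contributes a KL divergence near zero, which is how the formalism separates the SNIa/BAO and Lyman-α contributions above.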
A critical assessment of mortality statistics in Thailand: potential for improvements.
Tangcharoensathien, Viroj; Faramnuayphol, Pinij; Teokul, Waranya; Bundhamcharoen, Kanitta; Wibulpholprasert, Suwit
2006-01-01
This study evaluates the collection and flow of mortality and cause-of-death (COD) data in Thailand, identifying areas of weakness and presenting potential approaches to improve these statistics. Methods include systems analysis, literature review, and the application of the Health Metrics Network (HMN) self-assessment tool by key stakeholders. We identified two weaknesses underlying incompleteness of death registration and inaccuracy of COD attribution: problems in recording events or certifying deaths, and problems in transferring information from death certificates to death registers. Deaths occurring outside health facilities, representing 65% of all deaths in Thailand, contribute to the inaccuracy of cause-of-death data because they must be certified by village heads with limited knowledge and expertise in cause-of-death attribution. However, problems also exist with in-hospital cause-of-death certification by physicians. Priority should be given to training medical personnel in death certification, review of medical records by health personnel in district hospitals, and use of verbal autopsy techniques for assessing internal consistency. This should be coupled with stronger collaboration with district registrars for the 65% of deaths that occur outside hospitals. Training of physicians and data coders and harmonization of death certificates and registries would improve COD data for the 35% of deaths that take place in hospital. Public awareness of the importance of registering all deaths and the application of registration requirements prior to funerals would also improve coverage, though enforcement would be difficult. PMID:16583083
Meyer, Holly S; Durning, Steven J; Sklar, David P; Maggio, Lauren A
2018-03-01
Manuscripts submitted to Academic Medicine (AM) undergo an internal editor review to determine whether they will be sent for external peer review. Increasingly, manuscripts are rejected at this early stage. This study seeks to inform scholars about common reasons for internal editor review rejections, increase transparency of the process, and provide suggestions for improving submissions. A mixed-methods approach was used to retrospectively analyze editors' free-text comments. Descriptive content analysis was performed of editors' comments for 369 manuscripts submitted between December 2014 and December 2015 and rejected by AM prior to external peer review. Comments were analyzed, categorized, and counted for explicit reasons for rejection. Nine categories of rejection reasons were identified: ineffective study question and/or design (338; 92%); suboptimal data collection process (180; 49%); weak discussion and/or conclusions (139; 37%); unimportant or irrelevant topic to the journal's mission (137; 37%); weak data analysis and/or presentation of results (120; 33%); text difficult to follow or understand (89; 24%); inadequate or incomplete introduction (67; 18%); other publishing considerations (42; 11%); and issues with scientific conduct (20; 5%). Manuscripts had, on average, three or more reasons for rejection. Findings suggest that clear identification of a research question that is addressed by a well-designed study methodology on a topic aligned with the mission of the journal would address many of the problems that lead to rejection through the internal review process. The findings also align with research on external peer review.
Torres, Craig; Jones, Rachael; Boelter, Fred; Poole, James; Dell, Linda; Harper, Paul
2014-01-01
Bayesian Decision Analysis (BDA) uses Bayesian statistics to integrate multiple types of exposure information and classify exposures within the exposure rating categorization scheme promoted in American Industrial Hygiene Association (AIHA) publications. Prior distributions for BDA may be developed from existing monitoring data, mathematical models, or professional judgment. Professional judgments may misclassify exposures. We suggest that a structured qualitative risk assessment (QLRA) method can provide consistency and transparency in professional judgments. In this analysis, we use a structured QLRA method to define prior distributions (priors) for BDA. We applied this approach at three semiconductor facilities in South Korea, and present an evaluation of the performance of structured QLRA for determination of priors, and an evaluation of occupational exposures using BDA. Specifically, the structured QLRA was applied to chemical agents in similar exposure groups (SEGs) to identify provisional risk ratings. Standard priors were developed for each risk rating before review of historical monitoring data. Newly collected monitoring data were used to update priors informed by QLRA or historical monitoring data, and determine the posterior distribution. Exposure ratings were defined by the rating category with the highest probability, i.e., the most likely. We found the most likely exposure rating in the QLRA-informed priors to be consistent with historical and newly collected monitoring data, and the posterior exposure ratings developed with QLRA-informed priors to be equal to or greater than those developed with data-informed priors in 94% of comparisons. Overall, exposures at these facilities are consistent with well-controlled work environments. That is, the 95th percentiles of the exposure distributions are ≤50% of the occupational exposure limit (OEL) for all chemical-SEG combinations evaluated, and are ≤10% of the limit for 94% of chemical-SEG combinations evaluated.
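The update step in BDA can be sketched as a discrete Bayes rule over rating categories. The category parameters, prior, and measurements below are hypothetical; real AIHA-style analyses define the categories by fractions of the OEL and fit the exposure model more carefully.

```python
import math

# Hypothetical BDA update: a QLRA-informed prior over four exposure
# rating categories, updated with new monitoring data under an assumed
# lognormal exposure model per category.

def lognorm_pdf(x, mu, sigma):
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

# Assumed geometric means per category, as fractions of the OEL.
category_mu = {1: math.log(0.02), 2: math.log(0.15),
               3: math.log(0.4), 4: math.log(1.0)}
sigma = 0.8                                   # assumed common log-scale spread
prior = {1: 0.5, 2: 0.3, 3: 0.15, 4: 0.05}    # QLRA-informed prior

samples = [0.03, 0.05, 0.02]                  # new measurements (fraction of OEL)

post = {}
for cat, mu in category_mu.items():
    like = 1.0
    for x in samples:
        like *= lognorm_pdf(x, mu, sigma)     # likelihood of the data
    post[cat] = prior[cat] * like             # Bayes numerator
total = sum(post.values())
post = {cat: p / total for cat, p in post.items()}

rating = max(post, key=post.get)              # most likely exposure rating
print(rating)
```

With measurements well below the OEL, the posterior concentrates on the lowest rating category, which is the "most likely rating" rule described above.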
NASA Astrophysics Data System (ADS)
Truckenbrodt, Sina C.; Gómez-Dans, José; Stelmaszczuk-Górska, Martyna A.; Chernetskiy, Maxim; Schmullius, Christiane C.
2017-04-01
Throughout the past decades various satellite sensors have been launched that record reflectance in the optical domain and facilitate comprehensive monitoring of the vegetation-covered land surface from space. The interaction of photons with the canopy, leaves and soil that determines the spectrum of reflected sunlight can be simulated with radiative transfer models (RTMs). The inversion of RTMs permits the derivation of state variables such as leaf area index (LAI) and leaf chlorophyll content from top-of-canopy reflectance. Space-borne data are, however, insufficient for an unambiguous derivation of state variables and additional constraints are required to resolve this ill-posed problem. Data assimilation techniques permit the conflation of various information with due allowance for associated uncertainties. The Earth Observation Land Data Assimilation System (EO-LDAS) integrates RTMs into a dynamic process model that describes the temporal evolution of state variables. In addition, prior information is included to further constrain the inversion and enhance the state variable derivation. In previous studies on EO-LDAS, prior information was represented by temporally constant values for all investigated state variables, while information about their phenological evolution was neglected. Here, we examine to what extent the implementation of prior information reflecting the phenological variability improves the performance of EO-LDAS with respect to the monitoring of crops on the agricultural Gebesee test site (Central Germany). Various routines for the generation of prior information are tested. This involves the usage of data on state variables that was acquired in previous years as well as the application of phenological models. The performance of EO-LDAS with the newly implemented prior information is tested based on medium resolution satellite imagery (e.g., RapidEye REIS, Sentinel-2 MSI, Landsat-7 ETM+ and Landsat-8 OLI). The predicted state variables are validated against in situ data from the Gebesee test site that were acquired with a weekly to fortnightly resolution throughout the growing seasons of 2010, 2013, 2014 and 2016. Furthermore, the results are compared with the outcome of using constant values as prior information. In this presentation, the EO-LDAS scheme and results obtained from different prior information are presented.
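The contrast between a constant prior and a phenology-aware prior can be sketched with a double-logistic growth curve, a common functional form for seasonal LAI. The parameter values below are illustrative, not those used for Gebesee.

```python
import math

# A phenology-aware prior mean for LAI from a double-logistic seasonal
# trajectory, versus the season-long constant prior mean used in
# earlier EO-LDAS studies. All parameters are illustrative.

def lai_phenology(doy, lai_min=0.2, lai_max=5.0,
                  green_up=120, senescence=260, rate=0.08):
    # Rising logistic at green-up, falling logistic at senescence.
    rise = 1.0 / (1.0 + math.exp(-rate * (doy - green_up)))
    fall = 1.0 / (1.0 + math.exp(-rate * (doy - senescence)))
    return lai_min + (lai_max - lai_min) * (rise - fall)

constant_prior = 2.5   # constant prior mean, regardless of season
for doy in (60, 150, 200, 300):
    print(doy, round(lai_phenology(doy), 2), constant_prior)
```

Early and late in the season the phenological prior sits near the bare-soil value while the constant prior does not, which is exactly where a seasonally varying prior should constrain the inversion differently.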
Robust anonymous authentication scheme for telecare medical information systems.
Xie, Qi; Zhang, Jun; Dong, Na
2013-04-01
Patients can obtain various sorts of health-care delivery services via Telecare Medical Information Systems (TMIS). Authentication, security, patient privacy protection and data confidentiality are important when patients or doctors access Electronic Medical Records (EMR). In 2012, Chen et al. showed that Khan et al.'s dynamic ID-based authentication scheme has some weaknesses and proposed an improved scheme, which they claimed is more suitable for TMIS. However, we show that Chen et al.'s scheme also has some weaknesses. In particular, Chen et al.'s scheme does not provide user privacy protection or perfect forward secrecy, and is vulnerable to off-line password-guessing and impersonation attacks once the user's smart card is compromised. Further, we propose a secure anonymous authentication scheme that overcomes these weaknesses even if an adversary knows all the information stored in the smart card.
Predictive top-down integration of prior knowledge during speech perception.
Sohoglu, Ediz; Peelle, Jonathan E; Carlyon, Robert P; Davis, Matthew H
2012-06-20
A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly feedforward or whether higher-level knowledge influences sensory processing through feedback connections. Here we used concurrent EEG and MEG recordings to determine how sensory information and prior knowledge are integrated in the brain during speech perception. We manipulated listeners' prior knowledge of speech content by presenting matching, mismatching, or neutral written text before a degraded (noise-vocoded) spoken word. When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. This enhancement in clarity was associated with a spatiotemporal profile of brain activity uniquely consistent with a feedback process: activity in the inferior frontal gyrus was modulated by prior knowledge before activity in lower-level sensory regions of the superior temporal gyrus. In parallel, we parametrically varied the level of speech degradation, and therefore the amount of sensory detail, so that changes in neural responses attributable to sensory information and prior knowledge could be directly compared. Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. We argue that these data are best explained within the framework of predictive coding in which sensory activity is compared with top-down predictions and only unexplained activity propagated through the cortical hierarchy.
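The feedback account can be caricatured in a few lines: a higher level sends a prediction down, and only the unexplained residual ("prediction error") propagates up the hierarchy. This is a toy illustration of the predictive-coding idea, not a model of the EEG/MEG data; all numbers are made up.

```python
# Toy predictive-coding step: sensory evidence is compared with a
# top-down prediction, and only the residual is passed upward.

def prediction_error(sensory, prediction):
    return [s - p for s, p in zip(sensory, prediction)]

degraded_word = [0.9, 0.1, 0.8]    # toy sensory evidence for a noisy word
matching_prior = [0.8, 0.2, 0.7]   # strong prediction from matching text
neutral_prior = [0.5, 0.5, 0.5]    # no useful prior knowledge

err_match = prediction_error(degraded_word, matching_prior)
err_neutral = prediction_error(degraded_word, neutral_prior)
print(sum(abs(e) for e in err_match) < sum(abs(e) for e in err_neutral))  # -> True
```

When prior knowledge matches the input, less unexplained activity remains to propagate, consistent with the reduced sensory responses the study reports.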
Benjamin, Aaron S.; Diaz, Michael; Matzen, Laura E.; Johnson, Benjamin
2011-01-01
Older adults exhibit a disproportionate deficit in their ability to recover contextual elements or source information about prior encounters with stimuli. A recent theoretical account, DRYAD (Benjamin, 2010), attributes this selective deficit to a global decrease in memory fidelity with age, moderated by weak representation of contextual information. The predictions of DRYAD are tested here in three experiments. We show that an age-related deficit obtains for whichever aspect of the stimulus subjects’ attention is directed away from during encoding (Experiment 1), suggesting a central role for attention in producing the age-related deficit in context. We also show that an analogous deficit can be elicited within young subjects with a manipulation of study time (Experiment 2), suggesting that any means of reducing memory fidelity yields an interaction of the same form as the age-related effect. Experiment 3 evaluates the critical prediction of DRYAD that endorsement probability in an exclusion task should vary nonmonotonically with memory strength. This prediction was confirmed by assessing the shape of the forgetting function in a continuous exclusion task. The results are consistent with the DRYAD account of aging and memory judgments and do not support the widely held view that aging entails the selective disruption of processes involved in encoding, storing, or retrieving contextual information. PMID:21875219
Benjamin, Aaron S; Diaz, Michael; Matzen, Laura E; Johnson, Benjamin
2012-06-01
Older adults exhibit a disproportionate deficit in their ability to recover contextual elements or source information about prior encounters with stimuli. A recent theoretical account, DRYAD, attributes this selective deficit to a global decrease in memory fidelity with age, moderated by weak representation of contextual information. The predictions of DRYAD are tested here in three experiments. We show that an age-related deficit obtains for whichever aspect of the stimulus subjects' attention is directed away from during encoding (Experiment 1), suggesting a central role for attention in producing the age-related deficit in context. We also show that an analogous deficit can be elicited within young subjects with a manipulation of study time (Experiment 2), suggesting that any means of reducing memory fidelity yields an interaction of the same form as the age-related effect. Experiment 3 evaluates the critical prediction of DRYAD that endorsement probability in an exclusion task should vary nonmonotonically with memory strength. This prediction was confirmed by assessing the shape of the forgetting function in a continuous exclusion task. The results are consistent with the DRYAD account of aging and memory judgments and do not support the widely held view that aging entails the selective disruption of processes involved in encoding, storing, or retrieving contextual information. PsycINFO Database Record (c) 2012 APA, all rights reserved
2012-01-01
Background An important question in the analysis of biochemical data is that of identifying subsets of molecular variables that may jointly influence a biological response. Statistical variable selection methods have been widely used for this purpose. In many settings, it may be important to incorporate ancillary biological information concerning the variables of interest. Pathway and network maps are one example of a source of such information. However, although ancillary information is increasingly available, it is not always clear how it should be used nor how it should be weighted in relation to primary data. Results We put forward an approach in which biological knowledge is incorporated using informative prior distributions over variable subsets, with prior information selected and weighted in an automated, objective manner using an empirical Bayes formulation. We employ continuous, linear models with interaction terms and exploit biochemically-motivated sparsity constraints to permit exact inference. We show an example of priors for pathway- and network-based information and illustrate our proposed method on both synthetic response data and by an application to cancer drug response data. Comparisons are also made to alternative Bayesian and frequentist penalised-likelihood methods for incorporating network-based information. Conclusions The empirical Bayes method proposed here can aid prior elicitation for Bayesian variable selection studies and help to guard against mis-specification of priors. Empirical Bayes, together with the proposed pathway-based priors, results in an approach with a competitive variable selection performance. In addition, the overall procedure is fast, deterministic, and has very few user-set parameters, yet is capable of capturing interplay between molecular players. The approach presented is general and readily applicable in any setting with multiple sources of biological prior knowledge. PMID:22578440
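A minimal version of a pathway-weighted inclusion prior, with made-up names and numbers rather than the paper's empirical Bayes machinery: variables in a relevant pathway receive a higher prior inclusion probability, and posterior odds multiply that prior by a per-variable Bayes factor from the data.

```python
# Schematic pathway-informed variable selection: posterior inclusion
# probability = prior odds x Bayes factor, renormalised. The gene names,
# pathway flags, and Bayes factors are hypothetical.

def posterior_inclusion(prior_p, bayes_factor):
    odds = prior_p / (1.0 - prior_p) * bayes_factor
    return odds / (1.0 + odds)

variables = {
    "geneA": {"in_pathway": True, "bf": 3.0},   # in a relevant pathway
    "geneB": {"in_pathway": False, "bf": 3.0},  # same data support, no boost
}
for name, v in variables.items():
    prior_p = 0.5 if v["in_pathway"] else 0.1   # pathway-weighted prior
    print(name, round(posterior_inclusion(prior_p, v["bf"]), 2))
```

With identical data support, the pathway member ends up with a much higher posterior inclusion probability; the empirical Bayes step in the paper additionally tunes how strongly such prior information is weighted.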
Bucci, Melanie E.; Callahan, Peggy; Koprowski, John L.; Polfus, Jean L.; Krausman, Paul R.
2015-01-01
Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework involve consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors, and for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally-derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable. PMID:25803664
Derbridge, Jonathan J; Merkle, Jerod A; Bucci, Melanie E; Callahan, Peggy; Koprowski, John L; Polfus, Jean L; Krausman, Paul R
2015-01-01
Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework involve consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors, and for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally-derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable.
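The effect of the prior in a Bayesian SIMM can be seen in a toy two-source mixing model evaluated on a grid. The isotope values, discrimination-corrected source means, and prior below are illustrative, not the wolf data.

```python
import math

# Toy two-source stable isotope mixing model: posterior for the diet
# proportion p of source A given one consumer delta15N value, with a
# flat prior versus an informative prior on p. All values invented.

srcA, srcB = 8.0, 2.0    # discrimination-corrected source means (delta15N)
obs, sd = 6.5, 0.8       # consumer tissue value and residual sd

grid = [i / 200 for i in range(201)]    # candidate diet proportions

def posterior(prior):
    post = []
    for p, pr in zip(grid, prior):
        mu = p * srcA + (1 - p) * srcB                      # mixture mean
        post.append(pr * math.exp(-0.5 * ((obs - mu) / sd) ** 2))
    z = sum(post)
    return [v / z for v in post]

flat = [1.0] * len(grid)                                     # uninformative
informative = [math.exp(-0.5 * ((p - 0.8) / 0.1) ** 2) for p in grid]

mean_flat = sum(p * w for p, w in zip(grid, posterior(flat)))
mean_info = sum(p * w for p, w in zip(grid, posterior(informative)))
print(round(mean_flat, 2), round(mean_info, 2))
```

Even with a single observation, the informative prior visibly pulls the posterior mean, which is the sensitivity the study emphasises when priors are omitted or mis-specified.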
Cannon, Jonathan
2017-01-01
Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to synaptically communicate, and, in particular, it could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse. We derive an analytical expression describing the mutual information between their spike trains in terms of synapse strength, neuronal activation function, the time course of postsynaptic currents, and the time course of the background input received by the two neurons. This expression allows mutual information calculations that would otherwise be computationally intractable. We use this expression to analytically explore the interaction of excitation, information transmission, and the convexity of the activation function. Then, using this expression to quantify mutual information in simulations, we illustrate the information-gating effects of neural oscillations and oscillatory coherence, which may either increase or decrease the mutual information across the synapse depending on parameters. Finally, we show analytically that our results can quantitatively describe the selection of one information pathway over another when multiple sending neurons project weakly to a single receiving neuron.
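A generic way to quantify such communication in simulation is a histogram ("plug-in") estimate of mutual information between binned spike trains; this is the standard estimator, not the analytical expression derived in the study, and the coupling below is an invented toy.

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    # Histogram ("plug-in") mutual information estimate in bits.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(1)
pre = [1 if random.random() < 0.2 else 0 for _ in range(5000)]
# A weak synapse: postsynaptic firing probability rises modestly in
# bins where the presynaptic neuron fired.
post = [1 if random.random() < 0.1 + 0.3 * s else 0 for s in pre]
shuffled = post[:]
random.shuffle(shuffled)    # destroy the coupling as a null comparison

mi_coupled = mutual_information(pre, post)
mi_null = mutual_information(pre, shuffled)
print(mi_coupled > mi_null)   # -> True
```

Shuffling one train gives a null baseline for the small positive bias of the plug-in estimator, a standard check when MI is estimated from finite spike data.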
Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A
2009-10-01
Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.
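The basic move of turning estimates from auxiliary datasets into an informative prior can be sketched with a normal-normal update. The numbers are invented, and the actual method uses a semiparametric hierarchical model with MCMC rather than this two-moment summary.

```python
import statistics

# Sketch: auxiliary-dataset estimates define a normal prior (mean and
# between-dataset variance), which then shrinks a noisy estimate from
# the new dataset via a conjugate normal-normal update. Values invented.

aux_estimates = [0.12, 0.18, 0.15, 0.10, 0.20]   # e.g. indel-rate estimates
prior_mean = statistics.mean(aux_estimates)
prior_var = statistics.variance(aux_estimates)   # between-dataset spread

like_mean, like_var = 0.30, 0.04                 # noisy estimate, new dataset

# Precision-weighted combination (normal prior x normal likelihood).
post_prec = 1 / prior_var + 1 / like_var
post_mean = (prior_mean / prior_var + like_mean / like_var) / post_prec
post_var = 1 / post_prec
print(round(post_mean, 3))   # -> 0.156
```

Because the auxiliary datasets agree closely, the prior is precise and the noisy new estimate is shrunk strongly toward the consensus, with a posterior variance smaller than either source alone.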
Nowakowska, Marzena
2017-04-01
The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
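The sensitivity of a Bayesian logistic regression to its prior can be seen in a small MAP sketch with invented data; this is not the road-accident model, and real analyses would use MCMC rather than this gradient ascent, but it shows how an informative prior pulls the slope when data are sparse.

```python
import math

# MAP fit of a one-covariate logistic regression under a normal prior
# on the slope; the covariate and outcomes are invented and perfectly
# separable, so the prior visibly controls the estimate.

xs = [0.5, 1.2, 2.3, 2.9, 3.5, 4.1]   # a severity-related covariate
ys = [0, 0, 0, 1, 1, 1]               # severe (1) vs non-severe (0)

def map_fit(prior_mean, prior_sd, steps=5000, lr=0.01):
    # Gradient ascent on log-likelihood plus log normal prior density.
    b0, b1 = 0.0, prior_mean
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        g1 -= (b1 - prior_mean) / prior_sd ** 2   # prior gradient on slope
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

_, slope_vague = map_fit(prior_mean=0.0, prior_sd=100.0)   # near-flat prior
_, slope_tight = map_fit(prior_mean=0.5, prior_sd=0.1)     # informative prior
print(slope_tight < slope_vague)   # -> True
```

With separable data the near-flat prior lets the slope grow large, while the informative prior holds it close to its mean, which is the "different priors can lead to different models" caution stated above.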
Using expert knowledge for test linking.
Bolsinova, Maria; Hoijtink, Herbert; Vermeulen, Jorine Adinda; Béguin, Anton
2017-12-01
Linking and equating procedures are used to make the results of different test forms comparable. In cases where no assumption of randomly equivalent groups can be made, some form of linking design is used. In practice the amount of data available to link the two tests is often very limited for logistical and security reasons, which affects the precision of linking procedures. This study proposes to enhance the quality of linking procedures based on sparse data by using Bayesian methods which combine the information in the linking data with background information captured in informative prior distributions. We propose two methods for the elicitation of prior knowledge about the difference in difficulty of two tests from subject-matter experts and explain how these results can be used in the specification of priors. To illustrate the proposed methods and evaluate the quality of linking with and without informative priors, an empirical example of linking primary school mathematics tests is presented. The results suggest that informative priors can increase the precision of linking without decreasing the accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
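One common recipe for converting an elicited judgement into a prior is to match a stated central interval to a normal distribution. The sketch below, with invented numbers, shows how such a prior then sharpens a sparse-data linking estimate; it is a generic illustration, not the paper's elicitation methods.

```python
# Hypothetical elicitation-to-prior conversion for test linking:
# experts state a central 95% interval for the difficulty difference d
# between two forms; the matching normal prior is combined with a
# sparse linking-data estimate.

lo, hi = -0.2, 0.4                   # elicited 95% interval for d
prior_mean = (lo + hi) / 2
prior_sd = (hi - lo) / (2 * 1.96)    # central 95% interval is +/-1.96 sd

data_mean, data_sd = 0.35, 0.30      # estimate from a small linking sample

post_prec = 1 / prior_sd**2 + 1 / data_sd**2
post_mean = (prior_mean / prior_sd**2 + data_mean / data_sd**2) / post_prec
post_sd = post_prec ** -0.5
print(round(post_mean, 3), round(post_sd, 3))
```

The posterior standard deviation is smaller than both the prior's and the data's alone, which is the precision gain from informative priors the abstract reports.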
Exploring Encoding and Retrieval Effects of Background Information on Text Memory
ERIC Educational Resources Information Center
Rawson, Katherine A.; Kintsch, Walter
2004-01-01
Two experiments were conducted (a) to evaluate how providing background information at test may benefit retrieval and (b) to further examine how providing background information prior to study influences encoding. Half of the participants read background information prior to study, and the other half did not. In each group, half were presented…
McCarron, C Elizabeth; Pullenayegum, Eleanor M; Thabane, Lehana; Goeree, Ron; Tarride, Jean-Eric
2013-04-01
Bayesian methods have been proposed as a way of synthesizing all available evidence to inform decision making. However, few practical applications of the use of Bayesian methods for combining patient-level data (i.e., trial) with additional evidence (e.g., literature) exist in the cost-effectiveness literature. The objective of this study was to compare a Bayesian cost-effectiveness analysis using informative priors to a standard non-Bayesian nonparametric method to assess the impact of incorporating additional information into a cost-effectiveness analysis. Patient-level data from a previously published nonrandomized study were analyzed using traditional nonparametric bootstrap techniques and bivariate normal Bayesian models with vague and informative priors. Two different types of informative priors were considered to reflect different valuations of the additional evidence relative to the patient-level data (i.e., "face value" and "skeptical"). The impact of using different distributions and valuations was assessed in a sensitivity analysis. Models were compared in terms of incremental net monetary benefit (INMB) and cost-effectiveness acceptability frontiers (CEAFs). The bootstrapping and Bayesian analyses using vague priors provided similar results. The most pronounced impact of incorporating the informative priors was the increase in estimated life years in the control arm relative to what was observed in the patient-level data alone. Consequently, the incremental difference in life years originally observed in the patient-level data was reduced, and the INMB and CEAF changed accordingly. The results of this study demonstrate the potential impact and importance of incorporating additional information into an analysis of patient-level data, suggesting this could alter decisions as to whether a treatment should be adopted and whether more information should be acquired.
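The quantities compared in the study can be illustrated on simulated posterior draws (invented, not the study's data): the incremental net monetary benefit (INMB) at a given willingness-to-pay threshold, and the probability of cost-effectiveness that underlies a CEAF.

```python
import random

# Simulated posterior draws of incremental cost and effect; INMB at a
# willingness-to-pay (WTP) threshold and the probability the treatment
# is cost-effective at that threshold. All numbers are illustrative.

random.seed(0)
draws = 10000
d_cost = [random.gauss(2000, 800) for _ in range(draws)]   # incremental cost
d_eff = [random.gauss(0.10, 0.05) for _ in range(draws)]   # incremental QALYs

wtp = 50000                                                # WTP per QALY
inmb = [wtp * e - c for c, e in zip(d_cost, d_eff)]        # INMB per draw

mean_inmb = sum(inmb) / draws
p_ce = sum(b > 0 for b in inmb) / draws                    # prob. cost-effective
print(round(mean_inmb), round(p_ce, 2))
```

A CEAF is obtained by repeating the `p_ce` calculation over a range of WTP values; informative priors act on this analysis by shifting the posterior draws themselves, as the abstract describes for the control-arm life years.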
Bayesian bivariate meta-analysis of diagnostic test studies with interpretable priors.
Guo, Jingyi; Riebler, Andrea; Rue, Håvard
2017-08-30
In a bivariate meta-analysis, the number of diagnostic studies involved is often very low so that frequentist methods may result in problems. Using Bayesian inference is particularly attractive as informative priors that add a small amount of information can stabilise the analysis without overwhelming the data. However, Bayesian analysis is often computationally demanding and the selection of the prior for the covariance matrix of the bivariate structure is crucial with little data. The integrated nested Laplace approximations method provides an efficient solution to the computational issues by avoiding any sampling, but the important question of priors remain. We explore the penalised complexity (PC) prior framework for specifying informative priors for the variance parameters and the correlation parameter. PC priors facilitate model interpretation and hyperparameter specification as expert knowledge can be incorporated intuitively. We conduct a simulation study to compare the properties and behaviour of differently defined PC priors to currently used priors in the field. The simulation study shows that the PC prior seems beneficial for the variance parameters. The use of PC priors for the correlation parameter results in more precise estimates when specified in a sensible neighbourhood around the truth. To investigate the usage of PC priors in practice, we reanalyse a meta-analysis using the telomerase marker for the diagnosis of bladder cancer and compare the results with those obtained by other commonly used modelling approaches. Copyright © 2017 John Wiley & Sons, Ltd.
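For a standard deviation parameter, the PC prior reduces to an exponential density whose rate is fixed by a user statement P(σ > U) = α. The sketch below transcribes that recipe; the values of U and α are arbitrary, and the correlation-parameter PC prior used in the paper has a more involved form.

```python
import math

# PC prior for a standard deviation sigma: exponential with rate lambda
# chosen so that P(sigma > U) = alpha. U and alpha are user choices.

def pc_prior_rate(U, alpha):
    return -math.log(alpha) / U

def pc_prior_pdf(sigma, U, alpha):
    lam = pc_prior_rate(U, alpha)
    return lam * math.exp(-lam * sigma)

U, alpha = 1.0, 0.01        # "sigma exceeds 1 with probability 1%"
lam = pc_prior_rate(U, alpha)
tail = math.exp(-lam * U)   # exponential tail probability beyond U
print(round(lam, 3), round(tail, 3))   # -> 4.605 0.01
```

The tail check confirms the defining probability statement, which is what makes the hyperparameter specification interpretable to a non-statistician.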
Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.
Fennell, John; Baddeley, Roland
2012-10-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
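One simple way to make the claimed mechanism concrete (an illustrative sketch, not the authors' exact model) is shrinkage of a stated probability toward a prior anchor in log-odds space; any such prior pull reproduces the inverse-S weighting pattern of overweighted small and underweighted large probabilities:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def weight(p, delta=0.6, p0=0.5):
    """Shrink a stated probability toward a prior anchor p0 in log-odds
    space -- one simple way combining a prior with a probability statement
    yields a nonlinear weighting function."""
    return inv_logit(delta * logit(p) + (1 - delta) * logit(p0))

print(weight(0.01))  # above 0.01: low probabilities overweighted
print(weight(0.99))  # below 0.99: high probabilities underweighted
```

Here delta < 1 plays the role of trust in the stated probability; delta and p0 are illustrative parameters, not estimates from the paper.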
NASA Astrophysics Data System (ADS)
Simon, Patrick; Schneider, Peter
2017-08-01
In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. We discuss the origin of this prior bias. With regard to a fully Bayesian lensing analysis, we point out that passing tests with source samples subject to constant shear may not be sufficient for an analysis of sources with varying shear.
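The weighted quadrupole moments underlying such ellipticity estimates are easy to sketch. The following is an illustrative implementation under simplifying assumptions (square image, centroid at the grid centre, a fixed Gaussian weight); it is not the adaptive GLAM estimator itself:

```python
import numpy as np

def ellipticity(img, w_sigma=3.0):
    """Weighted quadrupole moments of a pixelised image and the derived
    epsilon-type complex ellipticity. Assumes a square image whose
    centroid sits at the grid centre."""
    n = img.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    w = np.exp(-(x**2 + y**2) / (2 * w_sigma**2))  # Gaussian weight profile
    f = w * img
    norm = f.sum()
    qxx = (f * x * x).sum() / norm
    qyy = (f * y * y).sum() / norm
    qxy = (f * x * y).sum() / norm
    denom = qxx + qyy + 2 * np.sqrt(qxx * qyy - qxy**2)
    return (qxx - qyy + 2j * qxy) / denom

# A round Gaussian blob should give (numerically) zero ellipticity.
n = 41
y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
round_img = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
print(abs(ellipticity(round_img)))  # ~0
```

An image elongated along the x axis yields a positive real part, which is the sense in which the moments measure shear-induced stretching.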
[Inferential evaluation of intimacy based on observation of interpersonal communication].
Kimura, Masanori
2015-06-01
How do people inferentially evaluate others' levels of intimacy with friends? We examined the inferential evaluation of intimacy based on the observation of interpersonal communication. In Experiment 1, participants (N = 41) responded to questions after observing conversations between friends. Results indicated that participants inferentially evaluated not only goodness of communication, but also intimacy between friends, using an expressivity heuristic approach. In Experiment 2, we investigated how inferential evaluation of intimacy was affected by prior information about relationships and by individual differences in face-to-face interactional ability. Participants (N = 64) were divided into prior- and no-prior-information groups and all performed the same task as in Experiment 1. Additionally, their interactional ability was assessed. In the prior-information group, individual differences had no effect on inferential evaluation of intimacy. On the other hand, in the no-prior-information group, face-to-face interactional ability partially influenced evaluations of intimacy. Finally, we discuss the fact that to understand one's social environment, it is important to observe others' interpersonal communications.
Objective Bayesian analysis of neutrino masses and hierarchy
NASA Astrophysics Data System (ADS)
Heavens, Alan F.; Sellentin, Elena
2018-04-01
Given the precision of current neutrino data, priors still noticeably impact the constraints on neutrino masses and their hierarchy. To avoid our understanding of neutrinos being driven by prior assumptions, we construct a prior that is mathematically minimally informative. Using the constructed uninformative prior, we find that the normal hierarchy is favoured, but with inconclusive posterior odds of 5.1:1. Better data are therefore needed before the neutrino masses and their hierarchy can be well constrained. We find that the next decade of cosmological data should provide conclusive evidence if the normal hierarchy with negligible minimum mass is correct and the uncertainty in the sum of neutrino masses drops below 0.025 eV. On the other hand, if neutrinos obey the inverted hierarchy, achieving strong evidence will be difficult with the same uncertainties. Our uninformative prior was constructed from principles of the Objective Bayesian approach. The prior is called a reference prior and is minimally informative in the specific sense that the information gain after collection of data is maximised. The prior is computed for the combination of neutrino oscillation data and cosmological data and still applies if the data improve.
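For reference, posterior odds convert directly to a posterior probability under equal prior odds: odds of 5.1:1 correspond to roughly 84%, which is why the result is labelled inconclusive. A one-line sketch:

```python
def posterior_prob(odds_ratio):
    """Posterior probability of the favoured model from posterior odds,
    assuming equal prior model odds."""
    return odds_ratio / (1 + odds_ratio)

# The paper's 5.1:1 odds for the normal hierarchy.
print(posterior_prob(5.1))  # ~0.84, favoured but far from decisive
```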
Teacher–student relationship at university: an important yet under-researched field
Hagenauer, Gerda; Volet, Simone E.
2014-01-01
This article reviews the extant research on the relationship between students and teachers in higher education across three main areas: the quality of this relationship, its consequences and its antecedents. The weaknesses and gaps in prior research are highlighted and the importance of addressing the multi-dimensional and context-bound nature of teacher–student relationships is proposed. A possible agenda for future research is outlined. PMID:27226693
Toward an Empirically-Based Parametric Explosion Spectral Model
2010-09-01
estimated (Richards and Kim, 2009). This archive could potentially provide 200 recordings of explosions at the Semipalatinsk Test Site of the former Soviet ... estimates of explosion yield, and prior work at the Nevada Test Site (NTS) (e.g., Walter et al., 1995) has found that explosions in weak materials have ... (2007). Corner frequency scaling of regional seismic phases for underground nuclear explosions at the Nevada Test Site, Bull. Seismol. Soc. Am. 97
Calibrating the Planck cluster mass scale with CLASH
NASA Astrophysics Data System (ADS)
Penna-Lima, M.; Bartlett, J. G.; Rozo, E.; Melin, J.-B.; Merten, J.; Evrard, A. E.; Postman, M.; Rykoff, E.
2017-08-01
We determine the mass scale of Planck galaxy clusters using gravitational lensing mass measurements from the Cluster Lensing And Supernova survey with Hubble (CLASH). We have compared the lensing masses to the Planck Sunyaev-Zeldovich (SZ) mass proxy for 21 clusters in common, employing a Bayesian analysis to simultaneously fit an idealized CLASH selection function and the distribution relating the measured observables to the true cluster mass. We used a tiered analysis strategy to explicitly demonstrate the importance of priors on weak lensing mass accuracy. In the case of an assumed constant bias, bSZ, between true cluster mass, M500, and the Planck mass proxy, MPL, our analysis constrains 1-bSZ = 0.73 ± 0.10 when moderate priors on weak lensing accuracy are used, including a zero-mean Gaussian with standard deviation of 8% to account for possible bias in lensing mass estimations. Our analysis explicitly accounts for possible selection bias effects in this calibration sourced by the CLASH selection function. Our constraint on the cluster mass scale is consistent with recent results from the Weighing the Giants program and the Canadian Cluster Comparison Project. It is also consistent, at 1.34σ, with the value needed to reconcile the Planck SZ cluster counts with Planck's base ΛCDM model fit to the primary cosmic microwave background anisotropies.
Cycle life test. [of secondary spacecraft cells
NASA Technical Reports Server (NTRS)
Harkness, J. D.
1977-01-01
Statistical information concerning cell performance characteristics and limitations of secondary spacecraft cells is presented. Weaknesses in cell design as well as battery weaknesses encountered in various satellite programs are reported. Emphasis is placed on improving the reliability of space batteries.
The Critical Role of Retrieval Processes in Release from Proactive Interference
ERIC Educational Resources Information Center
Bauml, Karl-Heinz T.; Kliegl, Oliver
2013-01-01
Proactive interference (PI) refers to the finding that memory for recently studied (target) information can be vastly impaired by the previous study of other (nontarget) information. PI can be reduced in a number of ways, for instance, by directed forgetting of the prior nontarget information, the testing of the prior nontarget information, or an…
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data on to idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as merely non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be also natural. Natural computing methods including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
Strength matters: Tie strength as a causal driver of networks' information benefits.
Kim, Minjae; Fernandez, Roberto M
2017-07-01
Studies of social networks have often taken the existence of a social tie as a proxy for the transmission of information. However, other studies of social networks in the labor market propose that the likelihood of information transmission might depend on strength of the tie; and that tie strength is a potentially important source of the tie's value. After all, even if job seekers have social ties to those who have valuable job information, the seekers will gain little information benefit when the ties do not actually transmit the information. This paper clarifies the conditions under which social ties might provide information benefits. We use a survey vignette experiment and ask MBA students about their likelihood of relaying job information via strong ties (to friends) or weak ties (to acquaintances), holding constant the structural locations spanned by the tie and job seekers' fit with the job. The results support the claim that strength of tie has a causal effect on the chances of information transmission: potential referrers are more likely to relay job information to their friends than to acquaintances. The larger implication of these findings is that whatever benefits there might be to using weak ties to reach distant non-redundant information during job search, these benefits need to be considered against the likely fact that people connected via weak ties are less likely to actually share information about job opportunities than are people to whom the job seeker is strongly tied. Copyright © 2016 Elsevier Inc. All rights reserved.
Karvelis, Povilas; Seitz, Aaron R; Lawrie, Stephen M; Seriès, Peggy
2018-05-14
Recent theories propose that schizophrenia/schizotypy and autistic spectrum disorder (ASD) are related to impairments in Bayesian inference, that is, how the brain integrates sensory information (likelihoods) with prior knowledge. However, existing accounts fail to clarify: (i) how proposed theories differ in accounts of ASD vs. schizophrenia and (ii) whether the impairments result from weaker priors or enhanced likelihoods. Here, we directly address these issues by characterizing how 91 healthy participants, scored for autistic and schizotypal traits, implicitly learned and combined priors with sensory information. This was accomplished through a visual statistical learning paradigm designed to quantitatively assess variations in individuals' likelihoods and priors. The acquisition of the priors was found to be intact along both trait spectra. However, autistic traits were associated with more veridical perception and weaker influence of expectations. Bayesian modeling revealed that this was due, not to weaker prior expectations, but to more precise sensory representations. © 2018, Karvelis et al.
Can a future choice affect a past measurement’s outcome?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aharonov, Yakir (Schmid College of Science, Chapman University, Orange, CA 92866; Iyar, The Israeli Institute for Advanced Research, Rehovot)
2015-04-15
An EPR experiment is studied where each particle within the entangled pair undergoes a few weak measurements (WMs) along some pre-set spin orientations, with the outcomes individually recorded. Then the particle undergoes one strong measurement along an orientation chosen at the last moment. Bell-inequality violation is expected between the two final measurements within each EPR pair. At the same time, statistical agreement is expected between these strong measurements and the earlier weak ones performed on that pair. A contradiction seemingly ensues: (i) Bell’s theorem forbids spin values to exist prior to the choice of the orientation measured; (ii) A weak measurement is not supposed to determine the outcome of a successive strong one; and indeed (iii) Almost no disentanglement is inflicted by the WMs; and yet (iv) The outcomes of weak measurements statistically agree with those of the strong ones, suggesting the existence of pre-determined values, in contradiction with (i). Although the conflict can be solved by mere mitigation of the above restrictions, the most reasonable resolution seems to be that of the Two-State-Vector Formalism (TSVF), namely, that the choice of the experimenter has been encrypted within the weak measurement’s outcomes, even before the experimenters themselves know what their choice will be.
Unusual neurological syndrome induced by atmospheric pressure change.
Ptak, Judy A; Yazinski, Nancy A; Block, Clay A; Buckey, Jay C
2013-05-01
We describe a case of a 46-yr-old female who developed hypertension, tachycardia, dysarthria, and leg weakness provoked by pressure changes associated with flying. Typically during the landing phase of flight, she would feel dizzy and note that she had difficulty with speech and leg weakness. After the flight the leg weakness persisted for several days. The symptoms were mitigated when she took a combined alpha-beta blocker (labetalol) prior to the flight. To determine if these symptoms were related to atmospheric pressure change, she was referred for testing in a hyperbaric chamber. She was exposed to elevated atmospheric pressure (maximum 1.2 ATA) while her heart rate and blood pressure were monitored. Within 1 min she developed tachycardia and hypertension. She also quickly developed slurred speech, left arm and leg weakness, and sensory changes in her left leg. She was returned to sea level pressure and her symptoms gradually improved. A full neurological workup has revealed no explanation for these findings. She has no air collections, cysts, or other anatomic findings that could be sensitive to atmospheric pressure change. The pattern is most consistent with a vascular event stimulated by altitude exposure. This case suggests that atmospheric pressure change can produce neurological symptoms, although the mechanism is unknown.
Self-prior strategy for organ reconstruction in fluorescence molecular tomography
Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen
2017-01-01
The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094
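The general idea of letting estimated prior information reweight the regularization can be illustrated with a generic penalised least-squares sketch. This is a hypothetical linear inverse problem; the paper's STFT-based prior extraction and Laplacian regularization are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical underdetermined linear inverse problem A x = b.
m, n = 40, 100
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[40:50] = 1.0                      # a compact "organ" region
b = A @ x_true + rng.normal(0, 0.01, m)

def solve(A, b, lam, w):
    """Penalised least squares: argmin ||Ax - b||^2 + lam * ||diag(w) x||^2.
    Smaller weights where a (self-)prior expects signal relax the penalty there."""
    W2 = np.diag(w**2)
    return np.linalg.solve(A.T @ A + lam * W2, A.T @ b)

w_flat = np.ones(n)                       # no prior: plain Tikhonov
w_self = np.ones(n)
w_self[40:50] = 0.1                       # prior: signal likely in 40..50
x_flat = solve(A, b, 1.0, w_flat)
x_self = solve(A, b, 1.0, w_self)
err = lambda x: np.linalg.norm(x - x_true)
print(err(x_self) < err(x_flat))  # expected True when the prior region is correct
```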
Elvidge, C K; Macnaughton, C J; Brown, G E
2013-05-01
Prey incorporate multiple forms of publicly available information on predation risk into threat-sensitive antipredator behaviours. Changes in information availability have previously been demonstrated to elicit transient alterations in behavioural patterns, while the effects of long-term deprivation of particular forms of information remain largely unexplored. Damage-released chemical alarm cues from the epidermis of fishes are rendered non-functional under weakly acidic conditions (pH < 6.6), depriving fish of an important source of information on predation risk in acidified waterbodies. We addressed the effects of long-term deprivation on the antipredator responses to different combinations of chemical and visual threat cues via in situ observations of wild, free-swimming 0+ Atlantic salmon (Salmo salar) fry in four neutral and four weakly acidic nursery streams. In addition, a cross-population transplant experiment and natural interannual variation in acidity enabled the examination of provenance and environment as causes of the observed differences in response. Fish living under weakly acidic conditions demonstrate significantly greater or hypersensitive antipredator responses to visual cues compared to fish under neutral conditions. Under neutral conditions, fish demonstrate complementary (additive or synergistic) effects of paired visual and chemical cues consistent with threat-sensitive responses. Cross-population transplants and interannual comparisons of responses strongly support the conclusion that differences in antipredator responses between neutral and weakly acidic streams result from the loss of chemical information on predation risk, as opposed to population-derived differences in behaviours.
Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.
Köver, Hania; Bao, Shaowen
2010-05-05
Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a percept that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli, is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
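The Bayesian integration step such models implement is standard precision-weighted cue combination, with a larger cortical representation acting like a smaller prior variance. A minimal sketch with illustrative numbers (e.g. sound frequency in Hz):

```python
def combine(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted (Bayesian) combination of a Gaussian prior with a
    noisy sensory observation, as in standard cue-integration models."""
    w = (1 / prior_var) / (1 / prior_var + 1 / obs_var)
    mean = w * prior_mean + (1 - w) * obs
    var = 1 / (1 / prior_var + 1 / obs_var)
    return mean, var

# A precise (over-represented) prior pulls the percept strongly toward it:
percept, _ = combine(prior_mean=1000.0, prior_var=100.0, obs=1200.0, obs_var=400.0)
print(percept)  # ~1040: biased toward the prior mean
```

Shrinking prior_var (a larger cortical representation of that frequency range) moves the percept further from the observation and closer to the prior, which is exactly the long-term bias the model predicts.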
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
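The estimator described here, least squares shrunk toward a prior mean with a ridge parameter controlling the prior's influence, can be sketched as follows. The data and prior values are simulated and hypothetical, not the paper's groundwater model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ill-conditioned regression (stand-in for a sparse-data model).
n, p = 30, 3
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 1] + 0.01 * rng.normal(size=n)  # near-collinear columns
beta_true = np.array([1.0, 2.0, 0.0])
y = X @ beta_true + rng.normal(0, 1.0, n)

beta_prior = np.array([1.0, 1.5, 0.5])  # prior information of unknown reliability

def ridge_with_prior(X, y, beta_prior, k):
    """Penalised least squares shrinking toward a prior mean:
    argmin ||y - X b||^2 + k * ||b - beta_prior||^2.
    The ridge parameter k sets the prior's degree of influence."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * beta_prior)

b_ols = ridge_with_prior(X, y, beta_prior, k=0.0)   # ordinary least squares
b_big = ridge_with_prior(X, y, beta_prior, k=1e6)   # prior dominates
print(b_big)  # ~ beta_prior
```

Sweeping k between these extremes and plotting estimated MSE against k is the diagnostic the paper proposes for choosing how much weight the prior should get.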
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
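The limiting argument can be demonstrated numerically: as the ridge penalty (an increasingly strong prior) grows, the regression solution's direction converges to the vector of marginal cue-criterion covariances, the regime in which tallying-style unit-weight rules become optimal. A sketch with simulated, directionally coded cues (not the paper's simulations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cue matrix (cues coded in their valid direction) and criterion.
X = rng.normal(size=(100, 4))
y = X @ np.array([0.9, 0.7, 0.4, 0.2]) + rng.normal(0, 1.0, 100)

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def direction(v):
    return v / np.linalg.norm(v)

# With an effectively infinite prior (huge k), the ridge solution's direction
# collapses onto X.T @ y: weights are set purely by marginal cue validities.
b_inf = direction(ridge(X, y, 1e8))
marginal = direction(X.T @ y)
print(np.abs(b_inf - marginal).max())  # ~0
```

Intermediate k values interpolate between this heuristic-like extreme and ordinary regression, which is the continuum along which the paper finds the best-performing models.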
2011-03-09
effective oversight of federal government programs and policies. Over the years, certain material weaknesses in internal control over ... ineffective process for preparing the consolidated financial statements. In addition to the material weaknesses underlying these major impediments, GAO ... noted material weaknesses involving billions of dollars in improper payments, information security, and tax collection activities. With regard to the
[Patient information prior to sterilization].
Rasmussen, O V; Henriksen, L O; Baldur, B; Hansen, T
1992-09-14
The law in Denmark prescribes that the patient and the general practitioner to whom the patient directs his or her request for sterilization are obliged to confirm by their signatures that the patient has received information about sterilization and its risks and consequences. We asked 97 men and 96 women if they had received this information prior to their sterilization. They were also asked about their knowledge of sterilization. 54% of the women and 35% of the men indicated that they had not received information. Only a few of these wished further information from the hospital doctor. Knowledge about sterilization was good. It is concluded that the information given to patients prior to sterilization is far from optimal. The patients' signature confirming verbal information is not a sufficient safeguard. We recommend, among other things, that the patient should receive written information and that both the general practitioner and the hospital responsible for the operation should ensure that optimal information is received by the patient.
An Intervention and Assessment to Improve Information Literacy
ERIC Educational Resources Information Center
Scharf, Davida
2013-01-01
Purpose: The goal of the study was to test an intervention using a brief essay as an instrument for evaluating higher-order information literacy skills in college students, while accounting for prior conditions such as socioeconomic status and prior academic achievement, and identify other predictors of information literacy through an evaluation…
Quantum Counterfactual Information Transmission Without a Weak Trace
NASA Astrophysics Data System (ADS)
Arvidsson Shukur, David; Barnes, Crispin
The classical theories of communication rely on the assumption that there has to be a flow of particles from Bob to Alice in order for him to send a message to her. We have developed a quantum protocol that allows Alice to perceive Bob's message "counterfactually", that is, without Alice receiving any particles that have interacted with Bob. By utilising a setup built on results from interaction-free measurements and the quantum Zeno effect, we outline a communication protocol in which the information travels in the opposite direction of the emitted particles. Unlike previous attempts at such protocols, in ours a weak measurement at the message source would not leave a weak trace that could be detected by Alice's receiver. Whilst some interaction-free schemes require a large number of carefully aligned beam-splitters, our protocol is realisable with two or more beam-splitters. Furthermore, we outline how the classical Fisher information Alice obtains about a weak variable at Bob's laboratory is negligible in our scheme. We demonstrate this protocol by numerically solving the time-dependent Schrödinger equation (TDSE) for a Hamiltonian that implements this quantum counterfactual phenomenon.
Conditional screening for ultra-high dimensional covariates with survival outcomes
Hong, Hyokyoung G.; Li, Yi
2017-01-01
Identifying important biomarkers that are predictive for cancer patients’ prognosis is key in gaining better insights into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has brought high demand in designing efficient screening tools for selecting predictive biomarkers. The vast amount of biomarkers defies any existing variable selection methods via regularization. The recently developed variable screening methods, though powerful in many practical settings, fail to incorporate prior information on the importance of each biomarker and are less powerful in detecting marginally weak but jointly important signals. We propose a new conditional screening method for survival outcome data by computing the marginal contribution of each biomarker given biological information known a priori. This is based on the premise that some biomarkers are known to be associated with disease outcomes a priori. Our method possesses sure screening properties and a vanishing false selection rate. The utility of the proposal is further confirmed with extensive simulation studies and analysis of a diffuse large B-cell lymphoma dataset. We are pleased to dedicate this work to Jack Kalbfleisch, who has made instrumental contributions to the development of modern methods of analyzing survival data. PMID:27933468
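The conditional-screening idea above can be sketched in a few lines. This is a simplified illustration only: the paper works with survival outcomes via the Cox partial likelihood, whereas this sketch substitutes least squares on a continuous outcome, and all data (including the a-priori-known biomarker `z` and the marginally weak candidate) are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 20
z = rng.normal(size=n)                      # biomarker known a priori to matter
X = rng.normal(size=(n, p))                 # candidate biomarkers
X[:, 3] = 0.9 * z + 0.3 * X[:, 3]           # candidate 3 tracks the known marker
y = z - X[:, 3] + rng.normal(size=n)        # jointly important, marginally weak

def rss(A, y):
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return r @ r

base = np.column_stack([np.ones(n), z])
# conditional screening: extra fit each candidate adds given the a-priori set
scores = [rss(base, y) - rss(np.column_stack([base, X[:, j]]), y)
          for j in range(p)]
print(int(np.argmax(scores)))               # candidate 3 should rank first
```

Marginal screening would miss candidate 3 here (its marginal correlation with `y` is near zero by construction); conditioning on the known biomarker reveals it.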
Waddington, I; Roderick, M; Naik, R
2001-01-01
Objective—To examine the methods of appointment, experience, and qualifications of club doctors and physiotherapists in professional football. Methods—Semistructured tape recorded interviews with 12 club doctors, 10 club physiotherapists, and 27 current and former players. A questionnaire was also sent to 90 club doctors; 58 were returned. Results—In almost all clubs, methods of appointment of doctors are informal and reflect poor employment practice: posts are rarely advertised and many doctors are appointed on the basis of personal contacts and without interview. Few club doctors had prior experience or qualifications in sports medicine and very few have a written job description. The club doctor is often not consulted about the appointment of the physiotherapist; physiotherapists are usually appointed informally, often without interview, and often by the manager without involving anyone who is qualified in medicine or physiotherapy. Half of all clubs do not have a qualified (chartered) physiotherapist; such unqualified physiotherapists are in a weak position to resist threats to their clinical autonomy, particularly those arising from managers' attempts to influence clinical decisions. Conclusions—Almost all aspects of the appointment of club doctors and physiotherapists need careful re-examination. Key Words: football clubs; doctors; physiotherapists; qualifications PMID:11157462
Herbst de Cortina, Sasha; Arora, Gitanjli; Wells, Traci; Hoffman, Risa M
2016-03-01
Given the lack of a standardized approach to medical student global health predeparture preparation, we evaluated an in-person, interactive predeparture orientation (PDO) at the University of California Los Angeles (UCLA) to understand program strengths, weaknesses, and areas for improvement. We administered anonymous surveys to assess the structure and content of the PDO and also surveyed a subset of students after travel on the utility of the PDO. We used Fisher's exact test to evaluate the association between prior global health experience and satisfaction with the PDO. One hundred and five students attended the PDO between 2010 and 2014 and completed the survey. One hundred and four students (99.0%) reported learning new information. Major strengths included faculty mentorship (N = 38, 19.7%), opportunities to interact with the UCLA global health community (N = 34, 17.6%), and sharing global health experiences (N = 32, 16.6%). Of students surveyed after their elective, 94.4% (N = 51) agreed or strongly agreed that the PDO provided effective preparation. Students with prior global health experience found the PDO to be as useful as students without experience (92.7% versus 94.4%, P = 1.0). On the basis of these findings, we believe that a well-composed PDO is beneficial for students participating in global health experiences and recommend further comparative studies of PDO content and delivery. © The American Society of Tropical Medicine and Hygiene.
Method for loading lipid-like vesicles with drugs or other chemicals
Mehlhorn, R.J.
1998-06-09
A method for accumulating drugs or other chemicals within synthetic, lipid-like vesicles by means of a pH gradient imposed on the vesicles just prior to use is described. The method is suited for accumulating molecules with basic or acid moieties which are permeable to the vesicle membranes in their uncharged form and for molecules that contain charged moieties that are hydrophobic ions and can therefore cross the vesicle membranes in their charged form. The method is advantageous over prior art methods for encapsulating biologically active materials within vesicles in that it achieves very high degrees of loading with simple procedures that are economical and require little technical expertise. Furthermore, kits which can be stored for prolonged periods prior to use without impairment of the capacity to achieve drug accumulation are described. A related application of the method consists of using this technology to detoxify animals that have been exposed to poisons with basic, weak acid or hydrophobic charge groups within their molecular structures. 2 figs.
Method of detoxifying animal suffering from overdose
Mehlhorn, Rolf J.
1997-01-01
A method for accumulating drugs or other chemicals within synthetic, lipid-like vesicles by means of a pH gradient imposed on the vesicles just prior to use is described. The method is suited for accumulating molecules with basic or acid moieties which are permeable to the vesicle membranes in their uncharged form and for molecules that contain charged moieties that are hydrophobic ions and can therefore cross the vesicle membranes in their charged form. The method is advantageous over prior art methods for encapsulating biologically active materials within vesicles in that it achieves very high degrees of loading with simple procedures that are economical and require little technical expertise. Furthermore, kits which can be stored for prolonged periods prior to use without impairment of the capacity to achieve drug accumulation are described. A related application of the method consists of using this technology to detoxify animals that have been exposed to poisons with basic, weak acid or hydrophobic charge groups within their molecular structure.
Dissecting effects of complex mixtures: who's afraid of informative priors?
Thomas, Duncan C; Witte, John S; Greenland, Sander
2007-03-01
Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
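The second-stage prior model described above can be sketched with the simplest parametric case: exchangeable exposure effects shrunk toward a common mean (semi-Bayes shrinkage). All numbers below are hypothetical first-stage estimates invented for illustration.

```python
import numpy as np

# first-stage (conventional) estimates, e.g. log odds ratios, for four
# correlated exposures, and their sampling variances -- all hypothetical
b = np.array([0.9, 1.4, -0.2, 0.7])
v = np.array([0.30, 0.50, 0.25, 0.40])

# second-stage prior model: effects exchangeable around mu with variance tau2
mu, tau2 = 0.5, 0.25

w = (1 / v) / (1 / v + 1 / tau2)        # precision weight on the data
shrunk = w * b + (1 - w) * mu           # posterior means shrink toward mu
print(np.round(shrunk, 2))
```

Imprecise estimates (large `v`) are pulled more strongly toward the prior mean; varying `mu` and `tau2` is one concrete form of the sensitivity analysis the authors advocate.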
Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.
Böing-Messing, Florian; Mulder, Joris
2018-05-03
In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
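The inequality-constrained testing idea can be illustrated with a minimal Monte Carlo sketch. Note this uses the related encompassing-prior construction (posterior over prior probability of the constraint), not the article's adjusted fractional Bayes factor, and the group variances and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical group data: sample variances and sizes
s1, n1 = 1.2, 40
s2, n2 = 2.9, 40

# posterior draws of each population variance under a Jeffreys prior
# (scaled inverse-chi-square with n - 1 degrees of freedom)
draws = 200_000
var1 = (n1 - 1) * s1 / rng.chisquare(n1 - 1, draws)
var2 = (n2 - 1) * s2 / rng.chisquare(n2 - 1, draws)

# Bayes factor of H1: var1 < var2 against the unconstrained model:
# posterior probability of the constraint over its prior probability (1/2)
bf = np.mean(var1 < var2) / 0.5
print(round(bf, 2))
```

For a one-sided constraint the Bayes factor is bounded above by 2; here the data strongly satisfy the constraint, so the value lands near that bound.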
The Power Prior: Theory and Applications
Ibrahim, Joseph G.; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang
2015-01-01
The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A to Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. PMID:26346180
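The power-prior construction is easy to see in the conjugate beta-binomial case: the historical likelihood is raised to a power a0 in [0, 1] that discounts the historical data. All counts below are hypothetical.

```python
# historical data D0 and current data D -- hypothetical counts
y0, n0 = 30, 100        # historical successes / trials
y, n   = 12, 50         # current successes / trials
a0 = 0.5                # power-prior weight on the historical likelihood

# with a Beta(1, 1) initial prior, raising the historical binomial
# likelihood to the power a0 keeps the posterior in the Beta family:
alpha = 1 + y + a0 * y0                 # 28.0
beta  = 1 + (n - y) + a0 * (n0 - y0)    # 74.0
post_mean = alpha / (alpha + beta)
print(round(post_mean, 3))              # between the current and historical rates
```

Setting a0 = 0 ignores the historical data entirely; a0 = 1 pools the two datasets, so a0 directly indexes how much historical borrowing occurs.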
Francoeur, Richard B
2015-01-01
The majority of patients with advanced cancer experience symptom pairs or clusters among pain, fatigue, and insomnia. Improved methods are needed to detect and interpret interactions among symptoms or disease markers to reveal influential pairs or clusters. In prior work, I developed and validated sequential residual centering (SRC), a method that improves the sensitivity of multiple regression to detect interactions among predictors by conditioning for multicollinearity (shared variation) among interactions and component predictors. Using a hypothetical three-way interaction among pain, fatigue, and sleep to predict depressive affect, I derive and explain SRC multiple regression. Subsequently, I estimate raw and SRC multiple regressions using real data for these symptoms from 268 palliative radiation outpatients. Unlike raw regression, SRC reveals that the three-way interaction (pain × fatigue/weakness × sleep problems) is statistically significant. In follow-up analyses, the relationship between pain and depressive affect is aggravated (magnified) within two partial ranges: 1) complete-to-some control over fatigue/weakness when there is complete control over sleep problems (i.e., a subset of the pain-fatigue/weakness symptom pair), and 2) no control over fatigue/weakness when there is some-to-no control over sleep problems (i.e., a subset of the pain-fatigue/weakness-sleep problems symptom cluster). Otherwise, the relationship weakens (buffering) as control over fatigue/weakness or sleep problems diminishes. By reducing the standard error, SRC unmasks a three-way interaction comprising a symptom pair and cluster. Low-to-moderate levels of the moderator variable for fatigue/weakness magnify the relationship between pain and depressive affect. However, when the comoderator variable for sleep problems accompanies fatigue/weakness, only frequent or unrelenting levels of both symptoms magnify the relationship.
These findings suggest that a countervailing mechanism involving depressive affect could account for the effectiveness of a cognitive behavioral intervention to reduce the severity of a pain, fatigue, and sleep disturbance cluster in a previous randomized trial.
Lateral orbitofrontal cortex anticipates choices and integrates prior with current information
Nogueira, Ramon; Abolafia, Juan M.; Drugowitsch, Jan; Balaguer-Ballester, Emili; Sanchez-Vives, Maria V.; Moreno-Bote, Rubén
2017-01-01
Adaptive behavior requires integrating prior with current information to anticipate upcoming events. Brain structures related to this computation should bring relevant signals from the recent past into the present. Here we report that rats can integrate the most recent prior information with sensory information, thereby improving behavior on a perceptual decision-making task with outcome-dependent past trial history. We find that anticipatory signals in the orbitofrontal cortex about upcoming choice increase over time and are even present before stimulus onset. These neuronal signals also represent the stimulus and relevant second-order combinations of past state variables. The encoding of choice, stimulus and second-order past state variables resides, up to movement onset, in overlapping populations. The neuronal representation of choice before stimulus onset and its build-up once the stimulus is presented suggest that orbitofrontal cortex plays a role in transforming immediate prior and stimulus information into choices using a compact state-space representation. PMID:28337990
Superposing pure quantum states with partial prior information
NASA Astrophysics Data System (ADS)
Dogra, Shruti; Thomas, George; Ghosh, Sibasish; Suter, Dieter
2018-05-01
The principle of superposition is an intriguing feature of quantum mechanics, which is regularly exploited in many different circumstances. A recent work [M. Oszmaniec et al., Phys. Rev. Lett. 116, 110403 (2016), 10.1103/PhysRevLett.116.110403] shows that the fundamentals of quantum mechanics restrict the process of superimposing two unknown pure states, even though it is possible to superimpose two quantum states with partial prior knowledge. The prior knowledge imposes geometrical constraints on the choice of input states. We discuss an experimentally feasible protocol to superimpose multiple pure states of a d -dimensional quantum system and carry out an explicit experimental realization for two single-qubit pure states with partial prior information on a two-qubit NMR quantum information processor.
Bias in Diet Determination: Incorporating Traditional Methods in Bayesian Mixing Models
Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G.; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo
2013-01-01
There are no “universal methods” to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors to study predators’ diets. Here we compare the diet compositions of the South American fur seal and sea lions obtained by scat analysis and by SIMMs-UP (uninformative priors) and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal’s diet, the sea lion’s diet did not have a clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMMs’ estimates, incorporating informative priors improved the precision in the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discrimination of prey contributions to diets, informative priors should lead to a more precise but unbiased estimated diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. 
The best approach to obtain a near-complete view of predators’ diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of natural history of the predator species so as to reliably ascertain and weight the information yielded by each method. PMID:24224031
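The uninformative-versus-informative prior comparison can be sketched with a toy mixing model: one isotopic tracer, two prey sources, and a grid over the diet proportion. All isotope values are invented, and the scat-derived prior is stylized as a Beta(8, 3); real SIMMs use multiple tracers and MCMC.

```python
import numpy as np

# one isotopic tracer, two prey sources -- all values hypothetical
mu_prey = np.array([-18.0, -12.0])                 # prey isotope means
consumer = np.array([-16.2, -15.8, -16.5, -16.0])  # consumer tissue values
sigma = 0.5                                        # residual spread

p = np.linspace(0.001, 0.999, 999)                 # diet proportion of prey 1
mix = p[:, None] * mu_prey[0] + (1 - p[:, None]) * mu_prey[1]
loglik = (-0.5 * ((consumer - mix) / sigma) ** 2).sum(axis=1)

def post_mean(log_prior):
    w = np.exp(loglik + log_prior - (loglik + log_prior).max())
    return float((p * w).sum() / w.sum())

flat = post_mean(np.zeros_like(p))                    # uninformative prior
scat = post_mean(7 * np.log(p) + 2 * np.log(1 - p))   # Beta(8, 3) scat prior
print(round(flat, 3), round(scat, 3))
```

With the informative prior centered above the likelihood, the posterior mean is pulled slightly toward the scat-based estimate, which is exactly the precision-versus-bias trade-off the abstract describes.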
When Generating Answers Benefits Arithmetic Skill: The Importance of Prior Knowledge
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij
2008-01-01
People remember information better if they generate the information while studying rather than read the information. However, prior research has not investigated whether this generation effect extends to related but unstudied items and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-01
... techniques of other forms of information technology, e.g., permitting electronic submission of responses..., Equity Size, Prior History with HUD Loans and prior sales participation. By executing the Qualification...
Integrating Informative Priors from Experimental Research with Bayesian Methods
Hamra, Ghassan; Richardson, David; MacLehose, Richard; Wing, Steve
2013-01-01
Informative priors can be a useful tool for epidemiologists to handle problems of sparse data in regression modeling. It is sometimes the case that an investigator is studying a population exposed to two agents, X and Y, where Y is the agent of primary interest. Previous research may suggest that the exposures have different effects on the health outcome of interest, one being more harmful than the other. Such information may be derived from epidemiologic analyses; however, in the case where such evidence is unavailable, knowledge can be drawn from toxicologic studies or other experimental research. Unfortunately, using toxicologic findings to develop informative priors in epidemiologic analyses requires strong assumptions, with no established method for its utilization. We present a method to help bridge the gap between animal and cellular studies and epidemiologic research by specification of an order-constrained prior. We illustrate this approach using an example from radiation epidemiology. PMID:23222512
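One simple way to realize an order-constrained prior of this kind is rejection sampling: draw from unconstrained priors for the two effects and keep only draws satisfying the ordering implied by the experimental evidence. The prior means and standard deviations below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# unconstrained prior guesses for the log rate ratios of agents X and Y
mu = np.array([0.4, 0.1])     # hypothetical prior means
sd = np.array([0.5, 0.5])     # hypothetical prior standard deviations

# toxicologic evidence: X is the more harmful agent, so impose beta_X >= beta_Y
draws = rng.normal(mu, sd, size=(100_000, 2))
kept = draws[draws[:, 0] >= draws[:, 1]]          # rejection sampling

print(np.round(kept.mean(axis=0), 2))             # constrained prior means
```

The truncation shifts the retained prior mass: the mean for X moves up and the mean for Y moves down relative to the unconstrained values, encoding the ordering without fixing either effect.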
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
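The missing-history idea can be sketched in its most stripped-down form: under a constant per-lineage fossil recovery rate, the unobserved gap between a clade's origin and its oldest fossil is exponentially distributed. This single-lineage simplification ignores the branching model and extant-diversity conditioning the authors actually fit, and both numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

oldest_fossil = 100.0   # Ma: age of the oldest confident fossil (hypothetical)
preservation = 0.05     # fossil recovery rate per lineage per Myr (hypothetical)

# single-lineage simplification: the unobserved gap between clade origin and
# the first preserved fossil is exponential with rate = recovery rate
gap = rng.exponential(1 / preservation, size=100_000)
node_age = oldest_fossil + gap          # draws from the node-age prior

print(round(float(node_age.mean()), 1))  # ~ oldest_fossil + 1/preservation
```

The sensitivity result in the abstract falls out directly: a low preservation rate implies a long expected gap and a diffuse (less informative) node-age prior, while a high rate concentrates the prior just above the oldest fossil.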
Kornmatitsuk, B; Dahl, E; Ropstad, E; Beckers, JF; Gustafsson, H; Kindahl, H
2004-01-01
The high incidence of stillbirth in Swedish Holstein heifers has increased continuously during the last 15 years to an average of 11% today. The pathological reasons behind the increased incidence of stillbirth are unknown. The present experiment was undertaken to investigate possible causes of stillbirth and to study possible physiological markers for predicting stillbirth. Twenty Swedish Holstein dairy heifers sired by bulls with breeding values for a high risk of stillbirth (n = 12) (experimental group) and a low risk of stillbirth (n = 8) (control group, group B) were selected based on information in the Swedish AI database. The experimental group consisted of 2 subgroups of heifers (groups A1 and A2) inseminated with semen from 2 different bulls with 3.5% and 9% higher stillbirth rates than the average, and the control group consisted of heifers pregnant by 5 different bulls with 0%–6% lower stillbirth rates than the average. The bull used for group A1 was also associated with calving difficulties due to large calves, whereas the bull used for group A2 showed no calving difficulties. The heifers were supervised from 6–7 months of pregnancy up to birth, and the pregnancies and parturitions were compared between groups regarding hormonal levels, haematology, placental characteristics and calf viability. In group A1, 1 stillborn, 1 weak and 4 normal calves were recorded. In group A2, 2 stillborn and 4 normal calves were registered. All animals in the control group gave birth to a normal living calf without any assistance. The weak calf showed deviating profiles of body temperature, saturated oxygen and heart rates, compared with the normal living calves. No differences in placentome thickness, measured in vivo by ultrasonography, were seen between the groups. The number of leukocytes and differential cell counts in groups A1 and A2 followed the profiles found in the control group. 
In group A1, a slight decrease of oestrone sulphate (E1SO4) levels was found in the animal delivering a stillborn calf from the first 24-h blood sampling at 6 weeks to the second at 3 weeks prior to delivery, while the levels of E1SO4 at both periods in the animal delivering a weak calf followed the profile in animals delivering a normal living calf. During late pregnancy and at the time of parturition, the levels of E1SO4 and PAGs in animals delivering a stillborn or weak calf (from group A1) followed the normal profiles found in animals delivering a normal living calf. In group A2, low levels of E1SO4 and pregnancy associated glycoproteins (PAGs) over 24 h at both 3 and 6 weeks prior to parturition (<1.5 nmol/L) were recorded in animals delivering a stillborn calf. During late pregnancy and parturition, the levels of E1SO4 and PAGs were slightly lower during 30–50 days prior to delivery and increased with a lower magnitude at the time of parturition. In conclusion, our results indicate that the aetiology behind stillbirth varies depending on the AI-bulls used and is associated with dystocia or low viability of the calves. Deviating profiles of oestrone sulphate (E1SO4) and pregnancy associated glycoproteins (PAGs) in animals delivering a stillborn calf not caused by dystocia were observed, suggesting placental dysfunction as a possible factor. The finding suggests that the analyses of E1SO4 and PAGs could be used for monitoring foetal well-being in animals with a high risk of stillbirth at term. PMID:15535086
Brayanov, Jordan B.
2010-01-01
Which is heavier: a pound of lead or a pound of feathers? This classic trick question belies a simple but surprising truth: when lifted, the pound of lead feels heavier—a phenomenon known as the size–weight illusion. To estimate the weight of an object, our CNS combines two imperfect sources of information: a prior expectation, based on the object's appearance, and direct sensory information from lifting it. Bayes' theorem (or Bayes' law) defines the statistically optimal way to combine multiple information sources for maximally accurate estimation. Here we asked whether the mechanisms for combining these information sources produce statistically optimal weight estimates for both perceptions and actions. We first studied the ability of subjects to hold one hand steady when the other removed an object from it, under conditions in which sensory information about the object's weight sometimes conflicted with prior expectations based on its size. Since the ability to steady the supporting hand depends on the generation of a motor command that accounts for lift timing and object weight, hand motion can be used to gauge biases in weight estimation by the motor system. We found that these motor system weight estimates reflected the integration of prior expectations with real-time proprioceptive information in a Bayesian, statistically optimal fashion that discounted unexpected sensory information. This produces a motor size–weight illusion that consistently biases weight estimates toward prior expectations. In contrast, when subjects compared the weights of two objects, their perceptions defied Bayes' law, exaggerating the value of unexpected sensory information. This produces a perceptual size–weight illusion that biases weight perceptions away from prior expectations. We term this effect “anti-Bayesian” because the bias is opposite that seen in Bayesian integration. 
Our findings suggest that two fundamentally different strategies for the integration of prior expectations with sensory information coexist in the nervous system for weight estimation. PMID:20089821
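The precision-weighted cue integration tested here can be sketched in a few lines. This is a minimal illustration of Bayes-optimal fusion of two Gaussian cues; the function name and the numbers are invented for the example, not taken from the study:

```python
import math

def combine_gaussian_cues(prior_mean, prior_sd, sense_mean, sense_sd):
    """Precision-weighted (Bayes-optimal) fusion of two Gaussian cues.

    Returns the posterior mean and standard deviation. The posterior mean
    lies between the prior and the sensory estimate, pulled toward
    whichever cue is more reliable (higher precision).
    """
    w_prior = 1.0 / prior_sd**2   # precision of the prior expectation
    w_sense = 1.0 / sense_sd**2   # precision of the sensory cue
    post_mean = (w_prior * prior_mean + w_sense * sense_mean) / (w_prior + w_sense)
    post_sd = math.sqrt(1.0 / (w_prior + w_sense))
    return post_mean, post_sd

# A large-looking object suggests 2.0 kg, but lifting it suggests 1.0 kg;
# with equally reliable cues the estimate sits halfway, biased toward the
# prior -- the direction of the motor size-weight illusion described above.
mean, sd = combine_gaussian_cues(2.0, 0.5, 1.0, 0.5)
```

The "anti-Bayesian" perceptual illusion reported in the abstract is precisely what this scheme cannot produce: here the estimate is always pulled toward, never pushed away from, the prior.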
Randomised prior feedback modulates neural signals of outcome monitoring.
Mushtaq, Faisal; Wilkie, Richard M; Mon-Williams, Mark A; Schaefer, Alexandre
2016-01-15
Substantial evidence indicates that decision outcomes are typically evaluated relative to expectations learned from relatively long sequences of previous outcomes. This mechanism is thought to play a key role in general learning and adaptation processes but relatively little is known about the determinants of outcome evaluation when the capacity to learn from series of prior events is difficult or impossible. To investigate this issue, we examined how the feedback-related negativity (FRN) is modulated by information briefly presented before outcome evaluation. The FRN is a brain potential time-locked to the delivery of decision feedback and it is widely thought to be sensitive to prior expectations. We conducted a multi-trial gambling task in which outcomes at each trial were fully randomised to minimise the capacity to learn from long sequences of prior outcomes. Event-related potentials for outcomes (Win/Loss) in the current trial (Outcome(t)) were separated according to the type of outcomes that occurred in the preceding two trials (Outcome(t-1) and Outcome(t-2)). We found that FRN voltage was more positive during the processing of win feedback when it was preceded by wins at Outcome(t-1) compared to win feedback preceded by losses at Outcome(t-1). However, no influence of preceding outcomes was found on FRN activity relative to the processing of loss feedback. We also found no effects of Outcome(t-2) on FRN amplitude relative to current feedback. Additional analyses indicated that this effect was largest for trials in which participants selected a decision different to the gamble chosen in the previous trial. These findings are inconsistent with models that solely relate the FRN to prediction error computation. Instead, our results suggest that if stable predictions about future events are weak or non-existent, then outcome processing can be determined by affective systems. 
More specifically, our results indicate that the FRN is likely to reflect the activity of positive affective systems in these contexts. Importantly, our findings indicate that a multifactorial explanation of the nature of the FRN is necessary and such an account must incorporate affective and motivational factors in outcome processing. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Randomised prior feedback modulates neural signals of outcome monitoring
Mushtaq, Faisal; Wilkie, Richard M.; Mon-Williams, Mark A.; Schaefer, Alexandre
2016-01-01
Substantial evidence indicates that decision outcomes are typically evaluated relative to expectations learned from relatively long sequences of previous outcomes. This mechanism is thought to play a key role in general learning and adaptation processes but relatively little is known about the determinants of outcome evaluation when the capacity to learn from series of prior events is difficult or impossible. To investigate this issue, we examined how the feedback-related negativity (FRN) is modulated by information briefly presented before outcome evaluation. The FRN is a brain potential time-locked to the delivery of decision feedback and it is widely thought to be sensitive to prior expectations. We conducted a multi-trial gambling task in which outcomes at each trial were fully randomised to minimise the capacity to learn from long sequences of prior outcomes. Event-related potentials for outcomes (Win/Loss) in the current trial (Outcome(t)) were separated according to the type of outcomes that occurred in the preceding two trials (Outcome(t-1) and Outcome(t-2)). We found that FRN voltage was more positive during the processing of win feedback when it was preceded by wins at Outcome(t-1) compared to win feedback preceded by losses at Outcome(t-1). However, no influence of preceding outcomes was found on FRN activity relative to the processing of loss feedback. We also found no effects of Outcome(t-2) on FRN amplitude relative to current feedback. Additional analyses indicated that this effect was largest for trials in which participants selected a decision different to the gamble chosen in the previous trial. These findings are inconsistent with models that solely relate the FRN to prediction error computation. Instead, our results suggest that if stable predictions about future events are weak or non-existent, then outcome processing can be determined by affective systems. 
More specifically, our results indicate that the FRN is likely to reflect the activity of positive affective systems in these contexts. Importantly, our findings indicate that a multifactorial explanation of the nature of the FRN is necessary and such an account must incorporate affective and motivational factors in outcome processing. PMID:26497268
How Judgments Change Following Comparison of Current and Prior Information
Albarracin, Dolores; Wallace, Harry M.; Hart, William; Brown, Rick D.
2013-01-01
Although much observed judgment change is superficial and occurs without considering prior information, other forms of change also occur. Comparison between prior and new information about an issue may trigger change by influencing either or both the perceived strength and direction of the new information. In four experiments, participants formed and reported initial judgments of a policy based on favorable written information about it. Later, these participants read a second passage containing strong favorable or unfavorable information on the policy. Compared to control conditions, subtle and direct prompts to compare the initial and new information led to more judgment change in the direction of a second passage perceived to be strong. Mediation analyses indicated that comparison yielded greater perceived strength of the second passage, which in turn correlated positively with judgment change. Moreover, self-reports of comparison mediated the judgment change resulting from comparison prompts. PMID:23599557
What quantum measurements measure
NASA Astrophysics Data System (ADS)
Griffiths, Robert B.
2017-09-01
A solution to the second measurement problem, determining what prior microscopic properties can be inferred from measurement outcomes ("pointer positions"), is worked out for projective and generalized (POVM) measurements, using consistent histories. The result supports the idea that equipment properly designed and calibrated reveals the properties it was designed to measure. Applications include Einstein's hemisphere and Wheeler's delayed choice paradoxes, and a method for analyzing weak measurements without recourse to weak values. Quantum measurements are noncontextual in the original sense employed by Bell and Mermin: if [A,B] = [A,C] = 0 and [B,C] ≠ 0, the outcome of an A measurement does not depend on whether it is measured with B or with C. An application to Bohm's model of the Einstein-Podolsky-Rosen situation suggests that a faulty understanding of quantum measurements is at the root of this paradox.
Johnson, Jr., James S.; Westmoreland, Clyde G.
1982-01-01
The present invention is directed to a sacrificial or competitive adsorbate for surfactants contained in chemical flooding emulsions for enhanced oil recovery operations. The adsorbate to be utilized in the method of the present invention is a caustic effluent from the bleach stage or the weak black liquor from the digesters and pulp washers of the kraft pulping process. This effluent or weak black liquor is injected into an oil-bearing subterranean earth formation prior to or concurrent with the chemical flood emulsion and is adsorbed on the active mineral surfaces of the formation matrix so as to effectively reduce adsorption of surfactant in the chemical flood. Alternatively, the effluent or liquor can be injected into the subterranean earth formation subsequent to a chemical flood to displace the surfactant from the mineral surfaces for the recovery thereof.
Dynamos driven by weak thermal convection and heterogeneous outer boundary heat flux
NASA Astrophysics Data System (ADS)
Sahoo, Swarandeep; Sreenivasan, Binod; Amit, Hagay
2016-01-01
We use numerical dynamo models with heterogeneous core-mantle boundary (CMB) heat flux to show that lower mantle lateral thermal variability may help support a dynamo under weak thermal convection. In our reference models with homogeneous CMB heat flux, convection is either marginally supercritical or absent, always below the threshold for dynamo onset. We find that lateral CMB heat flux variations organize the flow in the core into patterns that favour the growth of an early magnetic field. Heat flux patterns symmetric about the equator produce non-reversing magnetic fields, whereas anti-symmetric patterns produce polarity reversals. Our results may explain the existence of the geodynamo prior to inner core nucleation under a tight energy budget. Furthermore, in order to sustain a strong geomagnetic field, the lower mantle thermal distribution was likely dominantly symmetric about the equator.
Johnson, J.S. Jr.; Westmoreland, C.G.
1980-08-20
The present invention is directed to a sacrificial or competitive adsorbate for surfactants contained in chemical flooding emulsions for enhanced oil recovery operations. The adsorbate to be utilized in the method of the present invention is a caustic effluent from the bleach stage or the weak black liquor from the digesters and pulp washers of the kraft pulping process. This effluent or weak black liquor is injected into an oil-bearing subterranean earth formation prior to or concurrent with the chemical flood emulsion and is adsorbed on the active mineral surfaces of the formation matrix so as to effectively reduce adsorption of surfactant in the chemical flood. Alternatively, the effluent or liquor can be injected into the subterranean earth formation subsequent to a chemical flood to displace the surfactant from the mineral surfaces for the recovery thereof.
ERIC Educational Resources Information Center
Wetzels, Sandra A. J.; Kester, Liesbeth; van Merrienboer, Jeroen J. G.; Broers, Nick J.
2011-01-01
Background: Prior knowledge activation facilitates learning. Note taking during prior knowledge activation (i.e., note taking directed at retrieving information from memory) might facilitate the activation process by enabling learners to build an external representation of their prior knowledge. However, taking notes might be less effective in…
Unpacking Gender Differences in Students' Perceived Experiences in Introductory Physics
NASA Astrophysics Data System (ADS)
Kost, Lauren E.; Pollock, Steven J.; Finkelstein, Noah D.
2009-11-01
Prior research at our institution has shown that: 1) males outperform females on conceptual assessments (a gender gap), 2) the gender gap persists despite the use of research-based reforms, and 3) the gender gap is correlated with students' physics and mathematics background and prior attitudes and beliefs [Kost et al., PRST-PER, 5, 010101]. Our follow-up work begins to explore how males and females experience the introductory course differently and how these differences relate to the gender gap. We gave students in the introductory course a survey investigating their physics identity and self-efficacy. We find significant gender differences in each of these areas, and further find that these measures are weakly correlated with student conceptual performance and moderately correlated with course grade.
Tantowijoyo, W; Arguni, E; Johnson, P; Budiwati, N; Nurhayati, P I; Fitriana, I; Wardana, S; Ardiansyah, H; Turley, A P; Ryan, P; O'Neill, S L; Hoffmann, A A
2016-01-01
of mosquito vector populations, particularly through Wolbachia endosymbionts. The success of these strategies depends on understanding the dynamics of vector populations. In preparation for Wolbachia releases around Yogyakarta, we have studied Aedes populations in five hamlets. Adult monitoring with BioGent-Sentinel (BG-S) traps indicated that hamlet populations had different dynamics across the year; while there was an increase in Aedes aegypti (L.) and Aedes albopictus (Skuse) numbers in the wet season, species abundance remained relatively stable in some hamlets but changed markedly (>2 fold) in others. Local rainfall a month prior to monitoring partly predicted numbers of Ae. aegypti but not Ae. albopictus. Site differences in population size indicated by BG-S traps were also evident in ovitrap data. Egg or larval collections with ovitraps repeated at the same location suggested spatial autocorrelation (<250 m) in the areas of the hamlets where Ae. aegypti numbers were high. Overall, there was a weak negative association (r<0.43) between Ae. aegypti and Ae. albopictus numbers in ovitraps when averaged across collections. Ae. albopictus numbers in ovitraps and BG-S traps were positively correlated with vegetation around areas where traps were placed, while Ae. aegypti were negatively correlated with this feature. These data inform intervention strategies by defining periods when mosquito densities are high, highlighting the importance of local site characteristics on populations, and suggesting relatively weak interactions between Ae. aegypti and Ae. albopictus. They also indicate local areas within hamlets where consistently high mosquito densities may influence Wolbachia invasions and other interventions.
Vieira, Natassia M; Guo, Ling T; Estrela, Elicia; Kunkel, Louis M; Zatz, Mayana; Shelton, G Diane
2015-05-01
Animal models of dystrophin deficient muscular dystrophy, most notably canine X-linked muscular dystrophy, play an important role in developing new therapies for human Duchenne muscular dystrophy. Although the canine disease is a model of the human disease, the variable severity of clinical presentations in the canine may be problematic for pre-clinical trials, but also informative. Here we describe a family of Labrador Retrievers with three generations of male dogs having markedly increased serum creatine kinase activity, absence of membrane dystrophin, but with undetectable clinical signs of muscle weakness. Clinically normal young male Labrador Retriever puppies were evaluated prior to surgical neuter by screening laboratory blood work, including serum creatine kinase activity. Serum creatine kinase activities were markedly increased in the absence of clinical signs of muscle weakness. Evaluation of muscle biopsies confirmed a dystrophic phenotype with both degeneration and regeneration. Further evaluations by immunofluorescence and western blot analysis confirmed the absence of muscle dystrophin. Although dystrophin was not identified in the muscles, we did not find any detectable deletions or duplications in the dystrophin gene. Sequencing is now ongoing to search for point mutations. Our findings in this family of Labrador Retriever dogs lend support to the hypothesis that, in exceptional situations, muscle with no dystrophin may be functional. Unlocking the secrets that protect these dogs from a severe clinical myopathy is a great challenge which may have important implications for future treatment of human muscular dystrophies. Copyright © 2015 Elsevier B.V. All rights reserved.
Hu, Jing; Zheng, Yi; Gao, Jianbo
2013-01-01
Understanding the causal relation between neural inputs and movements is very important for the success of brain-machine interfaces (BMIs). In this study, we analyze 104 neurons’ firings using statistical, information theoretic, and fractal analysis. The latter include Fano factor analysis, multifractal adaptive fractal analysis (MF-AFA), and wavelet multifractal analysis. We find neuronal firings are highly non-stationary, and Fano factor analysis always indicates long-range correlations in neuronal firings, irrespective of whether those firings are correlated with movement trajectory or not, and thus does not reveal any actual correlations between neural inputs and movements. On the other hand, MF-AFA and wavelet multifractal analysis clearly indicate that when neuronal firings are not well correlated with movement trajectory, they do not have or only have weak temporal correlations. When neuronal firings are well correlated with movements, they are characterized by very strong temporal correlations, up to a time scale comparable to the average time between two successive reaching tasks. This suggests that neurons well correlated with hand trajectory experienced a “re-setting” effect at the start of each reaching task, in the sense that within the movement correlated neurons the spike trains’ long-range dependences persisted about the length of time the monkey used to switch between task executions. A new task execution re-sets their activity, making them only weakly correlated with their prior activities on longer time scales. We further discuss the significance of the coalition of those important neurons in executing cortical control of prostheses. PMID:24130549
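The Fano factor analysis mentioned above is the variance-to-mean ratio of spike counts in a window. A minimal sketch follows; the spike-count sequences are invented for illustration, and the window-size dependence used to detect long-range correlations is omitted:

```python
import statistics

def fano_factor(spike_counts):
    """Fano factor of a sequence of spike counts: variance / mean.

    Values near 1 are consistent with Poisson firing; markedly larger
    values indicate over-dispersed, clustered firing.
    """
    mean = statistics.fmean(spike_counts)
    var = statistics.pvariance(spike_counts)
    return var / mean

poisson_like = [5, 4, 6, 5, 5, 4, 6, 5]   # regular, low-variability firing
bursty = [0, 0, 12, 0, 13, 0, 0, 15]      # clustered (bursty) firing

ff_poisson = fano_factor(poisson_like)
ff_bursty = fano_factor(bursty)
```

In the full method, the factor is computed over a range of window sizes; a power-law growth of the Fano factor with window size is what signals long-range correlations, which is where the abstract notes the measure can mislead.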
NASA Astrophysics Data System (ADS)
Hee, S.; Vázquez, J. A.; Handley, W. J.; Hobson, M. P.; Lasenby, A. N.
2017-04-01
Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era cosmic microwave background, baryonic acoustic oscillations (BAO), Type Ia supernova (SNIa) and Lyman α (Lyα) data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance Λ cold dark matter (ΛCDM) model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other, a supernegative equation of state (also known as 'phantom dark energy') is identified within the 1.5σ confidence intervals of the posterior distribution. To identify the power of different data sets in constraining the dark energy equation of state, we use a novel formulation of the Kullback-Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible data set combination. The SNIa and BAO data sets are shown to provide much more constraining power in comparison to the Lyα data sets. Further, SNIa and BAO constrain most strongly around redshift range 0.1-0.5, whilst the Lyα data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular data set, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
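The Kullback-Leibler divergence used above to quantify how much information a data set adds has a closed form when prior and posterior are approximated as 1-D Gaussians. The widths below are hypothetical stand-ins for a strongly constraining (SNIa/BAO-like) versus weakly constraining (Lyα-like) data set, not values from the paper:

```python
import math

def kl_gaussian(mu_post, sd_post, mu_prior, sd_prior):
    """KL divergence D(posterior || prior) for 1-D Gaussians, in nats.

    A scalar measure of the information gained when a prior is updated
    to a posterior: tighter or more shifted posteriors give larger values.
    """
    return (math.log(sd_prior / sd_post)
            + (sd_post**2 + (mu_post - mu_prior)**2) / (2 * sd_prior**2)
            - 0.5)

# Hypothetical constraints on an equation-of-state parameter w,
# starting from a broad prior of width 1.0 centred on w = -1:
info_strong = kl_gaussian(-1.0, 0.05, -1.0, 1.0)  # narrow posterior
info_weak = kl_gaussian(-1.0, 0.6, -1.0, 1.0)     # broad posterior
```

The stronger data set yields the larger divergence, mirroring the paper's finding that SNIa and BAO carry far more constraining power than Lyα.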
Ankle-foot orthosis bending axis influences running mechanics.
Russell Esposito, Elizabeth; Ranz, Ellyn C; Schmidtbauer, Kelly A; Neptune, Richard R; Wilken, Jason M
2017-07-01
Passive-dynamic ankle-foot orthoses (AFOs) are commonly prescribed to improve locomotion for people with lower limb musculoskeletal weakness. The clinical prescription and design process are typically qualitative and based on observational assessment and experience. Prior work examining the effect of AFO design characteristics generally excludes higher impact activities such as running, providing clinicians and researchers limited information to guide the development of objective prescription guidelines. The proximal location of the bending axis may directly influence energy storage and return and resulting running mechanics. The purpose of this study was to determine if the location of an AFO's bending axis influences running mechanics. Marker and force data were recorded as 12 participants with lower extremity weakness ran overground while wearing a passive-dynamic AFO with posterior struts manufactured with central (middle) and off-centered (high and low) bending axes. Lower extremity joint angles, moments, powers, and ground reaction forces were calculated and compared between limbs and across bending axis conditions. Bending axis produced relatively small but significant changes. Ankle range of motion increased as the bending axis shifted distally (p<0.003). Peak ankle power absorption was greater in the low axis than high (p=0.013), and peak power generation was greater in the low condition than middle or high conditions (p<0.009). Half of the participants preferred the middle bending axis, four preferred low and two preferred high. Overall, if greater ankle range of motion is tolerated, a low bending axis provides power and propulsive benefits during running, although individual preference and physical ability should also be considered. Published by Elsevier B.V.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
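One ingredient of the approach, how an intrinsic-shape prior regularizes a noisy point estimate, can be sketched with a toy 1-D shrinkage model. This is not the authors' estimator or bias-correction scheme, only the basic mechanism by which a zero-mean Gaussian prior pulls noisy ellipticity measurements toward zero:

```python
import random
import statistics

def map_ellipticity(e_obs, noise_sd, prior_sd):
    """MAP estimate under a Gaussian likelihood centred on the observed
    ellipticity and a zero-mean Gaussian intrinsic-shape prior.
    The prior shrinks noisy measurements toward zero."""
    return e_obs * prior_sd**2 / (prior_sd**2 + noise_sd**2)

random.seed(1)
true_e = 0.2
noise_sd, prior_sd = 0.3, 0.25  # illustrative values only
obs = [true_e + random.gauss(0.0, noise_sd) for _ in range(5000)]

ml_mean = statistics.fmean(obs)  # maximum likelihood: the raw measurement
map_mean = statistics.fmean(map_ellipticity(e, noise_sd, prior_sd) for e in obs)
```

The shrinkage reduces scatter but also biases the mean toward zero; the point of the paper is that such biases can be characterized and removed using information in the likelihood itself.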
Cosmic microwave background snapshots: pre-WMAP and post-WMAP.
Bond, J Richard; Contaldi, Carlo; Pogosyan, Dmitry
2003-11-15
We highlight the remarkable evolution in the cosmic microwave background (CMB) power spectrum C_ℓ as a function of multipole ℓ over the past few years, and in the cosmological parameters for minimal inflation models derived from it: from anisotropy results before 2000; in 2000 and 2001 from Boomerang, Maxima and the Degree Angular Scale Interferometer (DASI), extending ℓ to approximately 1000; and in 2002 from the Cosmic Background Imager (CBI), Very Small Array (VSA), ARCHEOPS and Arcminute Cosmology Bolometer Array Receiver (ACBAR), extending ℓ to approximately 3000, with more from Boomerang and DASI as well. Pre-WMAP (pre-Wilkinson Microwave Anisotropy Probe) optimal band powers are in good agreement with each other and with the exquisite one-year WMAP results, unveiled in February 2003, which now dominate the ℓ ≲ 600 bands. These CMB experiments significantly increased the case for accelerated expansion in the early Universe (the inflationary paradigm) and at the current epoch (dark energy dominance) when they were combined with "prior" probabilities on the parameters. The minimal inflation parameter set, {ω_b, ω_cdm, Ω_tot, Ω_Λ, n_s, τ_C, σ_8}, is applied in the same way to the evolving data. C_ℓ database and Markov chain Monte Carlo (MCMC) methods are shown to give similar values, which are highly stable over time and for different prior choices, with the increasing precision best characterized by decreasing errors on uncorrelated "parameter eigenmodes". Priors applied range from weak ones to stronger constraints from the expansion rate (HST-h), from cosmic acceleration from supernovae (SN1) and from galaxy clustering, gravitational lensing and local cluster abundance (LSS). 
After marginalizing over the other cosmic and experimental variables for the weak + LSS prior, the pre-WMAP data of January 2003 compared with the post-WMAP data of March 2003 give Ω_tot = 1.03 (+0.05/−0.04) compared with 1.02 (+0.04/−0.03), consistent with (non-Baroque) inflation theory. Adding the flat Ω_tot = 1 prior, we find a nearly scale-invariant spectrum, n_s = 0.95 (+0.07/−0.04) compared with 0.97 ± 0.02. The evidence for a logarithmic variation of the spectral tilt is ≲2σ. The densities are: for baryons, ω_b ≡ Ω_b h² = 0.0217 ± 0.002 (compared with 0.0228 ± 0.001), near the Big Bang nucleosynthesis (BBN) estimate of 0.0214 ± 0.002; for CDM, ω_cdm ≡ Ω_cdm h² = 0.126 ± 0.012 (compared with 0.121 ± 0.010); for the substantial dark (unclustered) energy, Ω_Λ ≈ 0.66 (+0.07/−0.09) (compared with 0.70 ± 0.05). The dark energy pressure-to-density ratio w_Q is not well constrained by our weak + LSS prior, but adding SN1 gives w_Q ≲ −0.7 for both January 2003 and March 2003, consistent with the w_Q = −1 cosmological constant case. We find σ_8 = 0.89 (+0.06/−0.07) (compared with 0.86 ± 0.04), implying a sizable Sunyaev-Zel'dovich (SZ) effect from clusters and groups; the high-ℓ power found in the January 2003 data suggests σ_8 ≈ 0.94 (+0.08/−0.16) is needed to be SZ-compatible.
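The asymmetric errors quoted above are summaries of marginalized posteriors. A minimal sketch of extracting an equal-tailed 68% credible interval from an MCMC chain follows, using toy Gaussian samples standing in for a marginal posterior on Ω_tot (not the actual chains):

```python
import random
import statistics

def credible_interval(samples, level=0.68):
    """Equal-tailed credible interval from posterior samples, the kind
    of summary quoted as +/- errors on marginalized parameters."""
    xs = sorted(samples)
    lo_idx = int((1 - level) / 2 * (len(xs) - 1))
    hi_idx = int((1 + level) / 2 * (len(xs) - 1))
    return xs[lo_idx], xs[hi_idx]

random.seed(7)
# Toy marginal posterior for Omega_tot, centred near 1.02.
chain = [random.gauss(1.02, 0.035) for _ in range(20000)]
lo, hi = credible_interval(chain, 0.68)
mid = statistics.median(chain)
```

Quoting the median with the distances to the interval edges reproduces the familiar "central value (+upper/−lower)" format used throughout the abstract.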
ERIC Educational Resources Information Center
Dickey, Patsy A.
1980-01-01
Forty female students were used to compare the incremental difference in heart rate of shorthand writers when they were informed and not informed of shorthand speeds prior to dictation. It was concluded that students' performances were enhanced by receiving instructions as to speed of dictation prior to the take. (Author/CT)
Evaluation of two methods for using MR information in PET reconstruction
NASA Astrophysics Data System (ADS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-02-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods for introducing this information were evaluated and the Bowsher prior was considered the best; its main advantage is that it does not require image segmentation. Another widely used method for incorporating MR information relies on boundaries obtained by segmentation, which has also been shown to improve image quality. In this paper, these two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries, and CV values are 10% lower. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information via the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective for noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, remains to be evaluated.
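The neighbour selection at the heart of the Bowsher prior can be sketched in 1-D with invented MR intensities: for each voxel, the MAP penalty couples it only to the neighbours most similar in the anatomical image, so edges present in the MR are preserved without any segmentation step. This is a simplified sketch, not the reconstruction code evaluated in the paper:

```python
def bowsher_neighbors(mr, index, window=2, keep=2):
    """For one voxel, pick the `keep` neighbours (within +/- `window`)
    whose MR values are most similar to the voxel's own MR value.
    The smoothing penalty then acts only across these neighbours."""
    candidates = [j for j in range(max(0, index - window),
                                   min(len(mr), index + window + 1))
                  if j != index]
    candidates.sort(key=lambda j: abs(mr[j] - mr[index]))
    return sorted(candidates[:keep])

# MR intensity profile with an anatomical edge between positions 3 and 4.
mr = [10, 10, 10, 10, 50, 50, 50]
# Voxel 3 sits at the edge: its selected neighbours stay on the
# low-intensity side, so smoothing does not cross the boundary.
sel = bowsher_neighbors(mr, 3)
```

In a full 2-D or 3-D reconstruction the same selection runs over each voxel's spatial neighbourhood, and the resulting weights enter the MAP objective as the anatomical penalty term.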
Khan, Rihan; Krupinski, Elizabeth; Graham, J Allen; Benodin, Les; Lewis, Petra
2012-06-01
Whether first-year radiology residents are ready to start call after 6 or 12 months has been a subject of much debate. The purpose of this study was to establish an assessment that would evaluate the call readiness of first-year radiology residents and identify any individual areas of weakness using a comprehensive computerized format. Secondarily, we evaluated for any significant differences in performance before and after the change in precall training requirement from 6 to 12 months. A list of >140 potential emergency radiology cases was given to first-year radiology residents at the beginning of the academic year. Over 4 years, three separate versions of a computerized examination were constructed using hyperlinked PowerPoint presentations and given to both first-year and second-year residents. No resident took the same version of the exam twice. Exam score and number of cases failed were assessed. Individual areas of weakness were identified and remediated with the residents. Statistical analysis was used to evaluate exam score and the number of cases failed, considering resident year and the three versions of the exam. Over 4 years, 17 of 19 (89%) first-year radiology residents passed the exam on first attempt. The two who failed were remediated and passed a different version of the exam 6 weeks later. Using the oral board scoring system, first-year radiology residents scored an average of 70.7 with 13 cases failed, compared with an average of 71.1 and 8 cases failed for second-year residents, who scored statistically significantly higher. No significant difference was found in first-year radiology resident scoring before and after the 12-month training requirement prior to call. An emergency radiology examination was established to aid in the assessment of first-year radiology residents' competency prior to starting call, which has become a permanent part of the first-year curriculum. Over 4 years, all first-year residents were ultimately judged ready to start call. 
Of the variables assessed, only resident year showed a significant difference in scoring parameters. In particular, length of training prior to taking call showed no significant difference. Areas of weakness were identified for further study. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
What are they up to? The role of sensory evidence and prior knowledge in action understanding.
Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé
2011-02-18
Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations, acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that "intention" is an umbrella term that may embrace various sub-types, each assigned a different scope and target. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. the simple goal of a motor act), superordinate intentions (i.e. the general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of these intentions, we varied (1) the amount of visual information available from the action scene and (2) participants' prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intention judgments depend on a consistent interaction between visual information and participants' prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participants' priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation.
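The interaction between prior expectations and kinematic evidence can be sketched as discrete Bayesian inference over candidate intentions. The intention labels and probabilities below are invented for illustration, not stimuli or data from the experiments:

```python
def infer_intention(prior, likelihood):
    """Posterior over candidate intentions: normalize prior x likelihood.

    `prior` encodes the observer's expectations; `likelihood` encodes how
    well the observed movement kinematics fit each candidate intention.
    """
    unnorm = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

prior = {"cooperate": 0.8, "compete": 0.2}
# Ambiguous kinematics (flat likelihood): the prior dominates, as reported
# above for social and superordinate intentions.
ambiguous = {"cooperate": 0.5, "compete": 0.5}
# Clear kinematics: the evidence can override the prior.
clear = {"cooperate": 0.1, "compete": 0.9}

post_ambiguous = infer_intention(prior, ambiguous)
post_clear = infer_intention(prior, clear)
```

When the likelihood is flat, the posterior simply reproduces the prior; only sufficiently diagnostic kinematics shift the inferred intention away from expectations.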
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Department of Energy and its contractors store and process massive quantities of sensitive information to accomplish national security, energy, science, and environmental missions. Sensitive unclassified data, such as personally identifiable information (PII), official use only, and unclassified controlled nuclear information require special handling and protection to prevent misuse of the information for inappropriate purposes. Industry experts have reported that more than 203 million personal privacy records have been lost or stolen over the past three years, including information maintained by corporations, educational institutions, and Federal agencies. The loss of personal and other sensitive information can result in substantial financial harm, embarrassment, and inconvenience to individuals and organizations. Therefore, strong protective measures, including data encryption, help protect against the unauthorized disclosure of sensitive information. Prior reports involving the loss of sensitive information have highlighted weaknesses in the Department's ability to protect sensitive data. Our report on Security Over Personally Identifiable Information (DOE/IG-0771, July 2007) disclosed that the Department had not fully implemented all measures recommended by the Office of Management and Budget (OMB) and required by the National Institute of Standards and Technology (NIST) to protect PII, including failures to identify and encrypt PII maintained on information systems. Similarly, the Government Accountability Office recently reported that the Department had not yet installed encryption technology to protect sensitive data on the vast majority of laptop computers and handheld devices. Because of the potential for harm, we initiated this audit to determine whether the Department and its contractors adequately safeguarded sensitive electronic information. The Department had taken a number of steps to improve protection of PII.
Our review, however, identified opportunities to strengthen the protection of all types of sensitive unclassified electronic information and reduce the risk that such data could fall into the hands of individuals with malicious intent. In particular, for the seven sites we reviewed: (1) Four sites had either not ensured that sensitive information maintained on mobile devices was encrypted or had improperly permitted sensitive unclassified information to be transmitted unencrypted through email or to offsite backup storage facilities; (2) One site had not ensured that laptops taken on foreign travel, including travel to sensitive countries, were protected against security threats; and (3) Although required by the OMB since 2003, we learned that programs and sites were still working to complete Privacy Impact Assessments - analyses designed to examine the risks and ramifications of using information systems to collect, maintain, and disseminate personal information. Our testing revealed that the weaknesses identified were attributable, at least in part, to Headquarters programs and field sites that had not implemented existing policies and procedures requiring protection of sensitive electronic information. In addition, a lack of performance monitoring contributed to the inability of the Department and the National Nuclear Security Administration (NNSA) to ensure that measures were in place to fully protect sensitive information. As demonstrated by previous computer intrusion-related data losses throughout the Department, without improvements, the risk or vulnerability for future losses remains unacceptably high. In conducting this audit, we recognized that data encryption and related techniques do not provide absolute assurance that sensitive data is fully protected. For example, encryption will not necessarily protect data in circumstances where organizational access controls are weak or are circumvented through phishing or other malicious techniques.
However, as noted by NIST, when used appropriately, encryption is an effective tool that can, as part of an overall risk-management strategy, enhance security over critical personal and other sensitive information. The audit disclosed that Sandia National Laboratories had instituted a comprehensive program to protect laptops taken on foreign travel. In addition, the Department issued policy after our field work was completed that should standardize the Privacy Impact Assessment process, and, in so doing, provide increased accountability. While these actions are positive steps, additional effort is needed to help ensure that the privacy of individuals is adequately protected and that sensitive operational data is not compromised. To that end, our report contains several recommendations to implement a risk-based protection scheme for the protection of sensitive electronic information.
Characteristics of patients contacting a center for undiagnosed and rare diseases.
Mueller, Tobias; Jerrentrup, Andreas; Bauer, Max Jakob; Fritsch, Hans Walter; Schaefer, Juergen Rolf
2016-06-21
Little is known about the characteristics of patients seeking help from dedicated centers for undiagnosed and rare diseases. However, information about their demographics, symptoms, prior diagnoses and medical specialty is crucial to optimize these centers' processes and infrastructure. Using a questionnaire, structured information from 522 adult patients contacting a center for undiagnosed and rare diseases was obtained. The information included basic sociodemographic data (age, gender, insurance status), previous hospital admissions, primary symptoms of complaint and previously determined diagnoses. The majority of patients completing the questionnaire were women, 300 (57 %) vs. 222 men (43 %). The median age was 52 years (range 18-92). More than half of our patients, 309 (59 %), had never been admitted to a university hospital. Common diagnoses included other soft tissue disorders, not classified elsewhere (ICD M79, n = 63, 15.3 %), somatoform disorders (ICD F45, n = 51, 12.3 %) and other polyneuropathies (ICD G62, n = 36, 8.7 %). The most frequent symptoms were general weakness (n = 180, 36.6 %), followed by arthralgia (n = 124, 25.2 %) and abdominal discomfort (n = 113, 23.0 %). The majority of patients had either internal medicine (81.3 %) and/or neurologic (37.6 %) health problems. Pain-associated diagnoses and the typical "unexplained" medical conditions (chronic fatigue syndrome, fibromyalgia, irritable bowel syndrome) are frequent among people contacting a center dedicated to undiagnosed diseases. The chief symptoms are mostly unspecific. An interdisciplinary organizational approach involving mainly internal medicine, neurology and psychiatry/psychosomatic care is needed.
Michielsen, Koen; Nuyts, Johan; Cockmartin, Lesley; Marshall, Nicholas; Bosmans, Hilde
2016-12-01
In this work, the authors design and validate a model observer that can detect groups of microcalcifications in a four-alternative forced choice experiment and use it to optimize a smoothing prior for detectability of microcalcifications. A channelized Hotelling observer (CHO) with eight Laguerre-Gauss channels was designed to detect groups of five microcalcifications in a background of acrylic spheres by adding the CHO log-likelihood ratios calculated at the expected locations of the five calcifications. This model observer is then applied to optimize the detectability of the microcalcifications as a function of the smoothing prior. The authors examine the quadratic and total variation (TV) priors, and a combination of both. A selection of these reconstructions was then evaluated by human observers to validate the correct working of the model observer. The authors found a clear maximum for the detectability of microcalcifications when using the total variation prior with weight β_TV = 35. Detectability only varied over a small range for the quadratic and combined quadratic-TV priors when the weight β_Q of the quadratic prior was changed by two orders of magnitude. Spearman correlation with human observers was good except for the highest value of β for the quadratic and TV priors. Excluding those, the authors found ρ = 0.93 when comparing detection fractions, and ρ = 0.86 for the fitted detection threshold diameter. The authors successfully designed a model observer that was able to predict human performance over a large range of settings of the smoothing prior, except for the highest values of β, which were outside the useful range for good image quality. Since detectability only depends weakly on the strength of the combined prior, it is not possible to pick an optimal smoothness based only on this criterion. On the other hand, such a choice can now be made based on other criteria without worrying about calcification detectability.
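A toy sketch of the channelized Hotelling construction described above (dimensions and data are invented; the paper used eight Laguerre-Gauss channels and summed CHO log-likelihood ratios over five calcification locations, while a single location is shown here): images are projected onto a small channel set, and the Hotelling template is the inverse channel covariance applied to the mean signal difference.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pix, n_ch, n_img = 64, 8, 200
U = rng.normal(size=(n_pix, n_ch))        # stand-in channel profiles
signal = 0.5 * rng.normal(size=n_pix)     # stand-in calcification signal

absent = rng.normal(size=(n_img, n_pix))            # background-only patches
present = rng.normal(size=(n_img, n_pix)) + signal  # signal-present patches

va, vp = absent @ U, present @ U                    # channel outputs
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))             # pooled channel covariance
w = np.linalg.solve(S, vp.mean(axis=0) - va.mean(axis=0))  # CHO template

t_absent, t_present = va @ w, vp @ w   # decision variable for each image
```

Because the pooled covariance is positive definite, the mean decision variable is strictly higher for signal-present images, which is what makes thresholding (or a forced-choice comparison) meaningful.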
The power prior: theory and applications.
Ibrahim, Joseph G; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang
2015-12-10
The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A-to-Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. Copyright © 2015 John Wiley & Sons, Ltd.
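To make the construction concrete, here is a minimal beta-binomial sketch (illustrative numbers, mine rather than the review's): the historical likelihood is raised to a discounting power a0 in [0, 1], and conjugacy reduces the whole power-prior machinery to a Beta parameter update.

```python
# With an initial Beta(a, b) prior and historical data D0 = (x0 of n0)
# discounted by a0 in [0, 1], conjugacy gives
#   theta | D0, a0    ~ Beta(a + a0*x0, b + a0*(n0 - x0))
#   theta | D, D0, a0 ~ Beta(a + a0*x0 + x, b + a0*(n0 - x0) + n - x)
# where D = (x successes of n) is the current data.

def power_prior_posterior(x, n, x0, n0, a0, a=1.0, b=1.0):
    """Posterior Beta parameters under the binomial power prior."""
    return a + a0 * x0 + x, b + a0 * (n0 - x0) + (n - x)

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

# a0 = 0 discards the historical data, a0 = 1 pools it fully,
# and intermediate a0 borrows partial strength.
no_borrow = beta_mean(*power_prior_posterior(12, 40, 30, 60, a0=0.0))
partial = beta_mean(*power_prior_posterior(12, 40, 30, 60, a0=0.5))
full_pool = beta_mean(*power_prior_posterior(12, 40, 30, 60, a0=1.0))
```

With a current rate of 12/40 and a historical rate of 30/60, the posterior mean moves monotonically from the current-data estimate toward the pooled estimate as a0 grows.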
Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe
2017-12-01
The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. 
Although misalignment errors do not significantly impact the quality of the reconstructed fat and glandular structures for the 3D scenarios, the dielectric properties are reconstructed less accurately within the glandular structure for these cases relative to the 2D cases. However, general agreement between the 2D and 3D results was found. A key contribution of this paper is the detailed analysis of the impact of prior information errors on the reconstruction accuracy and ability to detect tumors. The results support the utility of acquiring patient-specific information with radar-based techniques and incorporating this information into MWT. The method is robust to errors in the dielectric properties of the background regional map, and to misalignment errors. Completion of this analysis is an important step toward developing the method into a practical diagnostic tool. © 2017 American Association of Physicists in Medicine.
2005 Tri-Service Infrastructure Systems Conference and Exhibition. Volume 7, Tracks 7 and 8
2005-08-04
dense soils have the potential to wash-out and erode with fluid rotary methods and over excavation and hydraulic fracturing can result. Short...circuiting is possible outside of the temporary or outer casing or through weak soils to grade. Hydraulic fracturing may take place due to soil properties...prevented the potential for hydraulic fracturing of the sensitive dam prior to grouting. Sonic drilling was selected from a range of proposed
Orlando, Ron
2010-01-01
The ability to quantitatively determine changes is an essential component of comparative glycomics. Multiple strategies are available by which this can be accomplished. These include label-free approaches and strategies where an isotopic label is incorporated into the glycans prior to analysis. The focus of this chapter is to describe each of these approaches while providing insight into their strengths and weaknesses, so that glycomic investigators can make an educated choice of the strategy that is best suited for their particular application.
Protecting Quantum Correlation from Correlated Amplitude Damping Channel
NASA Astrophysics Data System (ADS)
Huang, Zhiming; Zhang, Cai
2017-08-01
In this work, we investigate the dynamics of quantum correlation measured by measurement-induced nonlocality (MIN) and local quantum uncertainty (LQU) in correlated amplitude damping (CAD) channel. We find that the memory parameter brings different influences on MIN and LQU. In addition, we propose a scheme to protect quantum correlation by executing prior weak measurement (WM) and post-measurement reversal (MR). However, better protection of quantum correlation by the scheme implies a lower success probability (SP).
Psychopathic traits in nursing and criminal justice majors: a pilot study.
Clow, Kimberley A; Scott, Hannah S
2007-04-01
Prior findings suggest that psychopathic personality traits may be prevalent outside of the criminal sphere, such as in the business world. It is possible that particular work environments are attractive to individuals with higher psychopathic personality traits. To test this hypothesis, the current study investigated whether psychopathic personality scores could predict students' choices between two university majors, criminal justice or nursing (N = 174; 53 men, 121 women). Nursing education espouses nurturance and care, while criminal justice education teaches students informal and formal social control. Given these two educational mandates, it was predicted that students who scored higher on a scale of psychopathy would tend to enter criminal justice rather than nursing. Using logistic regression, results showed students with higher overall scores on the Psychopathic Personality Inventory, specifically higher scores on the Machiavellian Egocentricity subscale, were more likely to have chosen to major in criminal justice than nursing. Effects were generally weak but significant, accounting for between 5% and 25% of the variance in choice of major. Furthermore, this finding was not due to sex differences.
Separation in Logistic Regression: Causes, Consequences, and Control.
Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg
2018-04-01
Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
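A minimal numerical sketch of the penalized-likelihood remedy discussed above (my own toy data, not the paper's urinary-tract-infection example): y is perfectly separated by x, so the unpenalized maximum-likelihood slope diverges, while an L2 penalty, equivalently a mean-zero normal prior on the coefficients (the "weakly informative prior" reading of penalization), keeps the estimate finite and shrinks it.

```python
import numpy as np

def penalized_logit(X, y, lam, iters=50):
    """Newton-Raphson for logistic regression with L2 penalty strength lam."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p) - lam * beta
        # Hessian of penalized log-likelihood: -(X' diag(p(1-p)) X) - lam*I
        H = -(X.T * (p * (1.0 - p))) @ X - lam * np.eye(X.shape[1])
        beta = beta - np.linalg.solve(H, grad)
    return beta

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
X = np.column_stack([np.ones_like(x), x])
y = (x > 0).astype(float)     # outcome perfectly predicted by the sign of x

b_weak = penalized_logit(X, y, lam=0.01)   # weak penalty: large finite slope
b_strong = penalized_logit(X, y, lam=1.0)  # stronger prior: shrunken slope
```

With lam = 0 the same iteration would drift toward an infinite slope; any lam > 0 guarantees a finite penalized maximum, which is the point of the methods the paper surveys.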
Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism
NASA Astrophysics Data System (ADS)
Zender, C. S.; Wang, W.; Vicente, P.
2013-12-01
Big Data is an ugly name for the scientific opportunities and challenges created by the growing wealth of geoscience data. How do we weave large, disparate datasets together to best reveal their underlying properties, to exploit their strengths and minimize their weaknesses, and to continually aggregate more information than the world knew yesterday and less than we will learn tomorrow? Data analytics techniques (statistics, data mining, machine learning, etc.) can accelerate pattern recognition and discovery. However, researchers must often, prior to analysis, organize multiple related datasets into a coherent framework. Hierarchical organization permits entire datasets to be stored in nested groups that reflect their intrinsic relationships and similarities. Hierarchical data can be simpler and faster to analyze by coding operators to automatically parallelize processes over isomorphic storage units, i.e., groups. The newest generation of netCDF Operators (NCO) embodies this hierarchical approach, while still supporting traditional analysis approaches. We will use NCO to demonstrate the trade-offs involved in processing a prototypical Big Data application (analysis of CMIP5 datasets) using hierarchical and traditional analysis approaches.
Conlon, Anna S C; Taylor, Jeremy M G; Elliott, Michael R
2014-04-01
In clinical trials, a surrogate outcome variable (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Using the principal surrogacy framework introduced by Frangakis and Rubin (2002. Principal stratification in causal inference. Biometrics 58, 21-29), we consider an approach that has a causal interpretation and develop a Bayesian estimation strategy for surrogate validation when the joint distribution of potential surrogate and outcome measures is multivariate normal. From the joint conditional distribution of the potential outcomes of T, given the potential outcomes of S, we propose surrogacy validation measures from this model. As the model is not fully identifiable from the data, we propose some reasonable prior distributions and assumptions that can be placed on weakly identified parameters to aid in estimation. We explore the relationship between our surrogacy measures and the surrogacy measures proposed by Prentice (1989. Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine 8, 431-440). The method is applied to data from a macular degeneration study and an ovarian cancer study.
Serum and Plasma Cholinesterase Activity in the Cape Griffon Vulture (Gyps coprotheres).
Naidoo, Vinny; Wolter, Kerri
2016-04-28
Vulture (Accipitridae) poisonings are a concern in South Africa, with hundreds of birds dying annually. Although some of these poisonings are accidental, there has been an increase in the intentional baiting of poached rhinoceros (Rhinocerotidae) and elephant (Elephantidae) carcasses to kill vultures that alert officials to poaching sites by circling overhead. The primary chemicals implicated are the organophosphorous and carbamate compounds. Although most poisoning events can be identified by dead vultures surrounding the scavenged carcass, weak birds are occasionally found and brought to rehabilitation centers for treatment. The treating veterinarian needs to make an informed decision on the cause of illness or poisoning prior to treatment. We established the reference interval for serum and plasma cholinesterase activity in the Cape Griffon Vulture (Gyps coprotheres) as 591.58-1,528.26 U/L, providing a clinical assay for determining potential exposure to cholinesterase-depressing pesticides. Both manual and automated samplers were used with the butyrylthiocholine method. Species reference intervals for both serum and plasma cholinesterase showed good correlation, and manual and automated measurements yielded similar results.
Corruption and Coercion: University Autonomy versus State Control
ERIC Educational Resources Information Center
Osipian, Ararat L.
2008-01-01
A substantial body of literature considers excessive corruption an indicator of a weak state. However, in nondemocratic societies, corruption, whether informally approved, imposed, or regulated by public authorities, is often an indicator of a vertical power rather than an indicator of a weak state. This article explores the interrelations between…
Determination of void volume in normal phase liquid chromatography.
Jiang, Ping; Wu, Di; Lucy, Charles A
2014-01-10
Void volume is an important fundamental parameter in chromatography. Little prior discussion has focused on the determination of void volume in normal phase liquid chromatography (NPLC). Various methods to estimate the total void volume are compared: pycnometry; minor disturbance method based on injection of weak solvent; tracer pulse method; hold-up volume based on unretained compounds; and accessible volume based on Martin's rule and its descendants. These are applied to NPLC on silica, RingSep and DNAP columns. Pycnometry provides a theoretically maximum value for the total void volume and should be performed at least once for each new column. However, pycnometry does not reflect the volume of adsorbed strong solvent on the stationary phase, and so only yields an accurate void volume for weaker mobile phase conditions. 1,3,5-Tri-t-butyl benzene (TTBB) results in hold-up volumes that are convenient measures of the void volume for all eluent conditions on charge-transfer columns (RingSep and DNAP), but is weakly retained under weak eluent conditions on silica. Injection of the weak mobile phase component (hexane) may be used to determine void volume, but care must be exercised to select the appropriate disturbance feature. Accessible volumes, that are determined using a homologous series, are always biased low, and are not recommended as a measure of the void volume. Copyright © 2013 Elsevier B.V. All rights reserved.
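The pycnometric determination described above reduces to a single formula. A hedged sketch (symbols and numbers are mine, purely illustrative): the column is weighed filled with each of two pure solvents of known density, and the mass difference divided by the density difference gives the total void volume.

```python
def pycnometric_void_volume(m_solvent_a, m_solvent_b, rho_a, rho_b):
    """Void volume (mL) from filled-column masses (g) and solvent densities (g/mL)."""
    return (m_solvent_a - m_solvent_b) / (rho_a - rho_b)

# Illustrative numbers only: column weighed filled with a denser and a
# lighter solvent (e.g. dichloromethane vs. hexane).
v0 = pycnometric_void_volume(95.2, 88.9, 1.325, 0.659)
```

As the abstract notes, this gives a theoretical maximum for the total void volume; it does not account for strong solvent adsorbed on the stationary phase.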
Use of Very Weak Radiation Sources to Determine Aircraft Runway Position
NASA Technical Reports Server (NTRS)
Drinkwater, Fred J., III; Kibort, Bernard R.
1965-01-01
Various methods of providing runway information in the cockpit during the take-off and landing roll have been proposed. The most reliable method has been to use runway distance markers when visible. Flight tests were used to evaluate the feasibility of using weak radioactive sources to trigger a runway distance counter in the cockpit. The results of these tests indicate that a weak radioactive source would provide a reliable signal by which this indicator could be operated.
Informative priors on fetal fraction increase power of the noninvasive prenatal screen.
Xu, Hanli; Wang, Shaowei; Ma, Lin-Lin; Huang, Shuai; Liang, Lin; Liu, Qian; Liu, Yang-Yang; Liu, Ke-Di; Tan, Ze-Min; Ban, Hao; Guan, Yongtao; Lu, Zuhong
2017-11-09
Purpose: Noninvasive prenatal screening (NIPS) sequences a mixture of the maternal and fetal cell-free DNA. Fetal trisomy can be detected by examining chromosomal dosages estimated from sequencing reads. The traditional method uses the Z-test, which compares a subject against a set of euploid controls, where the information of fetal fraction is not fully utilized. Here we present a Bayesian method that leverages informative priors on the fetal fraction. Method: Our Bayesian method combines the Z-test likelihood and informative priors of the fetal fraction, which are learned from the sex chromosomes, to compute Bayes factors. The Bayesian framework can account for nongenetic risk factors through the prior odds, and our method can report individual positive/negative predictive values. Results: Our Bayesian method has more power than the Z-test method. We analyzed 3,405 NIPS samples and spotted at least 9 (of 51) possible Z-test false positives. Conclusion: Bayesian NIPS is more powerful than the Z-test method, is able to account for nongenetic risk factors through prior odds, and can report individual positive/negative predictive values. Genetics in Medicine advance online publication, 9 November 2017; doi:10.1038/gim.2017.186.
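A toy numeric version of the idea (notation, prior, and shift model are mine; the paper's likelihood and prior are more elaborate): the trisomy likelihood is averaged over an informative prior on the fetal fraction f, and the ratio against the euploid N(0, 1) likelihood gives the Bayes factor.

```python
import math

def norm_pdf(x, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def bayes_factor(z, f_prior, shift_per_f=50.0):
    """BF of trisomy vs. euploidy for a dosage z-score.

    f_prior: (f, weight) pairs, weights summing to 1 (discretized prior on
    the fetal fraction). shift_per_f: assumed mean z-shift per unit fetal
    fraction -- an invented, illustrative scale.
    """
    like_trisomy = sum(w * norm_pdf(z, mu=shift_per_f * f) for f, w in f_prior)
    return like_trisomy / norm_pdf(z)

# Discretized prior centred on f = 0.10 (in the paper, learned from the
# sex chromosomes).
prior = [(0.06, 0.2), (0.10, 0.6), (0.14, 0.2)]
bf_high = bayes_factor(5.0, prior)   # z consistent with the prior's shift
bf_low = bayes_factor(1.0, prior)    # z near the euploid expectation
```

Multiplying the Bayes factor by prior odds (which can encode nongenetic risk factors) yields posterior odds, and hence individual predictive values, which is the workflow the abstract describes.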
NASA Astrophysics Data System (ADS)
Wang, He; Zhang, Wen-Hao; Wong, K. Y. Michael; Wu, Si
Extensive studies suggest that the brain integrates multisensory signals in a Bayesian optimal way. However, it remains largely unknown how the sensory reliability and the prior information shape the neural architecture. In this work, we propose a biologically plausible neural field model, which can perform optimal multisensory integration and encode the whole profile of the posterior. Our model is composed of two modules, each for one modality. The crosstalks between the two modules can be carried out through feedforward cross-links and reciprocal connections. We found that the reciprocal couplings are crucial to optimal multisensory integration in that the reciprocal coupling pattern is shaped by the correlation in the joint prior distribution of the sensory stimuli. A perturbative approach is developed to illustrate the relation between the prior information and features in coupling patterns quantitatively. Our results show that a decentralized architecture based on reciprocal connections is able to accommodate complex correlation structures across modalities and utilize this prior information in optimal multisensory integration. This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12 and 605813) and National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China (31261160495).
Francoeur, Richard B
2015-01-01
Background: The majority of patients with advanced cancer experience symptom pairs or clusters among pain, fatigue, and insomnia. Improved methods are needed to detect and interpret interactions among symptoms or disease markers to reveal influential pairs or clusters. In prior work, I developed and validated sequential residual centering (SRC), a method that improves the sensitivity of multiple regression to detect interactions among predictors by conditioning for multicollinearity (shared variation) among interactions and component predictors. Materials and methods: Using a hypothetical three-way interaction among pain, fatigue, and sleep to predict depressive affect, I derive and explain SRC multiple regression. Subsequently, I estimate raw and SRC multiple regressions using real data for these symptoms from 268 palliative radiation outpatients. Results: Unlike raw regression, SRC reveals that the three-way interaction (pain × fatigue/weakness × sleep problems) is statistically significant. In follow-up analyses, the relationship between pain and depressive affect is aggravated (magnified) within two partial ranges: 1) complete-to-some control over fatigue/weakness when there is complete control over sleep problems (i.e., a subset of the pain–fatigue/weakness symptom pair), and 2) no control over fatigue/weakness when there is some-to-no control over sleep problems (i.e., a subset of the pain–fatigue/weakness–sleep problems symptom cluster). Otherwise, the relationship weakens (buffering) as control over fatigue/weakness or sleep problems diminishes. Conclusion: By reducing the standard error, SRC unmasks a three-way interaction comprising a symptom pair and a cluster. Low-to-moderate levels of the moderator variable for fatigue/weakness magnify the relationship between pain and depressive affect. However, when the comoderator variable for sleep problems accompanies fatigue/weakness, only frequent or unrelenting levels of both symptoms magnify the relationship.
These findings suggest that a countervailing mechanism involving depressive affect could account for the effectiveness of a cognitive behavioral intervention to reduce the severity of a pain, fatigue, and sleep disturbance cluster in a previous randomized trial. PMID:25565865
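A hedged single-interaction illustration of the residual-centering step at the heart of SRC (synthetic data and variable names are mine; the full method applies this sequentially up through the three-way term): the raw product x1*x2 is regressed on an intercept and its components, and its residual, exactly orthogonal to the main effects, replaces it, removing the shared variation that inflates the interaction's standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 268                                  # same n as the outpatient sample
x1 = rng.normal(2.0, 1.0, n)             # e.g. pain score
x2 = 1.5 + 0.6 * x1 + 0.8 * rng.normal(size=n)  # e.g. fatigue, correlated
raw_int = x1 * x2                        # collinear with the main effects

# Regress the product on an intercept and its components; keep the residual.
Z = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(Z, raw_int, rcond=None)
centered_int = raw_int - Z @ coef        # residual-centered interaction

corr_raw = np.corrcoef(raw_int, x1)[0, 1]            # large for the raw term
corr_centered = np.corrcoef(centered_int, x1)[0, 1]  # ~0 after centering
```

Using `centered_int` in place of `raw_int` in the outcome regression leaves the model's fit unchanged but stabilizes the interaction coefficient's standard error, which is how SRC gains sensitivity.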
22 CFR 129.8 - Prior notification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Prior notification. 129.8 Section 129.8 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS REGISTRATION AND LICENSING OF...,000, except for sharing of basic marketing information (e.g., information that does not include...
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
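A hedged sketch of the conjugate machinery described above (all numbers invented): exponential event times in each arm give a Gamma(a0 + events, b0 + exposure) posterior for the hazard. An informative Gamma prior on the control stands in for the historical information (the paper derives it via the meta-analytic-predictive approach), the investigational arm gets a near-flat prior, and the proof-of-concept decision is a posterior probability on the hazard ratio's size rather than a p-value.

```python
import random

def posterior_hazard_draws(events, exposure, a0, b0, n_draws, rng):
    """Monte Carlo draws from the Gamma posterior of an exponential hazard."""
    a, b = a0 + events, b0 + exposure
    return [rng.gammavariate(a, 1.0 / b) for _ in range(n_draws)]

rng = random.Random(7)
# Control: 40 events over 100 person-years, plus historical Gamma(30, 75).
ctrl = posterior_hazard_draws(40, 100.0, a0=30.0, b0=75.0, n_draws=20000, rng=rng)
# Treatment: 25 events over 100 person-years, near-flat Gamma(0.1, 0.1) prior.
trt = posterior_hazard_draws(25, 100.0, a0=0.1, b0=0.1, n_draws=20000, rng=rng)

# Go/no-go criterion on effect size, e.g. require P(HR < 0.8) to be high.
p_go = sum(t / c < 0.8 for t, c in zip(trt, ctrl)) / len(trt)
```

Design optimization as described in the abstract would then tune sample size and allocation ratio to maximize the chance that this criterion fires when, and only when, the treatment truly works.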
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
2018-01-01
Abstract In real-world environments, humans comprehend speech by actively integrating prior knowledge (P) and expectations with sensory input. Recent studies have revealed effects of prior information in temporal and frontal cortical areas and have suggested that these effects are underpinned by enhanced encoding of speech-specific features, rather than a broad enhancement or suppression of cortical activity. However, in terms of the specific hierarchical stages of processing involved in speech comprehension, the effects of integrating bottom-up sensory responses and top-down predictions are still unclear. In addition, it is unclear whether the predictability that comes with prior information may differentially affect speech encoding relative to the perceptual enhancement that comes with that prediction. One way to investigate these issues is through examining the impact of P on indices of cortical tracking of continuous speech features. Here, we did this by presenting participants with degraded speech sentences that either were or were not preceded by a clear recording of the same sentences while recording non-invasive electroencephalography (EEG). We assessed the impact of prior information on an isolated index of cortical tracking that reflected phoneme-level processing. Our findings suggest the possibility that prior information affects the early encoding of natural speech in a dual manner. Firstly, the availability of prior information, as hypothesized, enhanced the perceived clarity of degraded speech, which was positively correlated with changes in phoneme-level encoding across subjects. In addition, P induced an overall reduction of this cortical measure, which we interpret as resulting from the increase in predictability. PMID:29662947
The Effects of Prior Knowledge Activation on Free Recall and Study Time Allocation.
ERIC Educational Resources Information Center
Machiels-Bongaerts, Maureen; And Others
The effects of mobilizing prior knowledge on information processing were studied. Two hypotheses, the cognitive set-point hypothesis and the selective attention hypothesis, try to account for the facilitation effects of prior knowledge activation. These hypotheses predict different recall patterns as a result of mobilizing prior knowledge. In…
21 CFR 1.280 - How must you submit prior notice?
Code of Federal Regulations, 2010 CFR
2010-04-01
... to FDA. You must submit all prior notice information in the English language, except that an... Commercial System (ABI/ACS); or (2) The FDA PNSI at http://www.access.fda.gov. You must submit prior notice through the FDA Prior Notice System Interface (FDA PNSI) for articles of food imported or offered for...
The Counter-Intuitive Non-Informative Prior for the Bernoulli Family
ERIC Educational Resources Information Center
Zhu, Mu; Lu, Arthur Y.
2004-01-01
In Bayesian statistics, the choice of the prior distribution is often controversial. Different rules for selecting priors have been suggested in the literature, which, sometimes, produce priors that are difficult for the students to understand intuitively. In this article, we use a simple heuristic to illustrate to the students the rather…
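The point that "non-informative" priors for the Bernoulli family are not innocuous is easy to demonstrate numerically. A minimal sketch with hypothetical data (3 successes in 10 trials), comparing posterior means under three common default Beta priors:

```python
# Posterior mean under a Beta(a, b) prior and binomial data is
# (a + successes) / (a + b + trials): each prior adds pseudo-counts.
successes, trials = 3, 10

priors = {
    "uniform Beta(1,1)": (1.0, 1.0),
    "Jeffreys Beta(1/2,1/2)": (0.5, 0.5),
    "near-Haldane Beta(0.01,0.01)": (0.01, 0.01),
}

post_means = {name: (a + successes) / (a + b + trials)
              for name, (a, b) in priors.items()}
for name, m in post_means.items():
    print(f"{name}: posterior mean = {m:.3f}")
```

With only 10 trials the choice of "non-informative" prior already moves the posterior mean by a few percentage points, which is exactly the kind of intuition-building exercise the article aims at.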
Promotion of cooperation in evolutionary game dynamics with local information.
Liu, Xuesong; Pan, Qiuhui; He, Mingfeng
2018-01-21
In this paper, we propose a strategy-updating rule driven by local information, which is called Local process. Unlike the standard Moran process, the Local process does not require global information about the strategic environment. By analyzing the dynamical behavior of the system, we explore how the local information influences the fixation of cooperation in two-player evolutionary games. Under weak selection, the decreasing local information leads to an increase of the fixation probability when natural selection does not favor cooperation replacing defection. In the limit of sufficiently large selection, the analytical results indicate that the fixation probability increases with the decrease of the local information, irrespective of the evolutionary games. Furthermore, for the dominance of defection games under weak selection and for coexistence games, the decreasing of local information will lead to a speedup of a single cooperator taking over the population. Overall, to some extent, the local information is conducive to promoting cooperation. Copyright © 2017 Elsevier Ltd. All rights reserved.
van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.
2015-01-01
Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during the analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534
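The prior-sensitivity check described above (and in the head abstract) has a simple closed form in conjugate models. A minimal sketch, not the authors' growth model, using a normal-normal update with all numbers hypothetical: the same weakly informative prior pulls hard on sparse data and barely at all on rich data.

```python
import math

def posterior_normal(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal-normal update for a mean with known sampling SD.
    The posterior mean is a precision-weighted average of prior and data,
    so the prior's pull shrinks as n grows."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / data_sd ** 2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Sensitivity check: same weakly informative prior, sparse vs. rich data.
prior = (0.0, 1.0)   # e.g. a weakly informative prior on a log odds ratio
for n in (5, 500):
    m, s = posterior_normal(*prior, data_mean=1.2, data_sd=2.0, n=n)
    print(n, round(m, 3), round(s, 3))
```

Re-running this with a deliberately misspecified prior mean and comparing the posteriors is the essence of the sensitivity analysis the authors argue should always accompany informative priors.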
NASA Astrophysics Data System (ADS)
Udovydchenkov, Ilya A.
2017-07-01
Modal pulses are broadband contributions to an acoustic wave field with fixed mode number. Stable weakly dispersive modal pulses (SWDMPs) are special modal pulses that are characterized by weak dispersion and weak scattering-induced broadening and are thus suitable for communications applications. This paper investigates, using numerical simulations, receiver array requirements for recovering information carried by SWDMPs under various signal-to-noise ratio conditions without performing channel equalization. Two groups of weakly dispersive modal pulses are common in typical mid-latitude deep ocean environments: the lowest order modes (typically modes 1-3 at 75 Hz), and intermediate order modes whose waveguide invariant is near-zero (often around mode 20 at 75 Hz). Information loss is quantified by the bit error rate (BER) of a recovered binary phase-coded signal. With fixed receiver depths, low BERs (less than 1%) are achieved at ranges up to 400 km with three hydrophones for mode 1 with 90% probability and with 34 hydrophones for mode 20 with 80% probability. With optimal receiver depths, depending on propagation range, only a few, sometimes only two, hydrophones are often sufficient for low BERs, even with intermediate mode numbers. Full modal resolution is unnecessary to achieve low BERs. Thus, a flexible receiver array of autonomous vehicles can outperform a cabled array.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
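Why one piece of prior information can rescue an otherwise ill-posed inversion can be sketched with a toy problem. Everything here is hypothetical (the two-parameter linear "model", the data, the penalty weight), and plain gradient descent stands in for the GA/truncated-Newton search: the two parameters are nearly collinear, so the misfit alone cannot separate them, but a single prior constraint on one parameter pins both down.

```python
# Nearly collinear sensitivities: heads constrain roughly k1 + k2, not each.
xs = [(1.0, 1.001), (2.0, 2.002), (3.0, 2.999)]   # hypothetical sensitivities
heads = [3.0, 6.0, 9.0]                            # noise-free "observations"
prior_val, prior_weight = 1.0, 10.0                # one prior datum: k2 ~ 1.0

def loss(k1, k2):
    misfit = sum((k1 * a + k2 * b - h) ** 2 for (a, b), h in zip(xs, heads))
    return misfit + prior_weight * (k2 - prior_val) ** 2

# Gradient descent with numerical gradients (illustrative optimizer only).
k1 = k2 = 0.0
lr, eps = 1e-3, 1e-6
for _ in range(20000):
    g1 = (loss(k1 + eps, k2) - loss(k1 - eps, k2)) / (2 * eps)
    g2 = (loss(k1, k2 + eps) - loss(k1, k2 - eps)) / (2 * eps)
    k1, k2 = k1 - lr * g1, k2 - lr * g2
print(round(k1, 2), round(k2, 2))
```

Dropping the prior term leaves a long flat valley of near-equivalent (k1, k2) pairs, which is the regional-scale identifiability problem the abstract describes.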
Variational stereo imaging of oceanic waves with statistical constraints.
Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise
2013-11-01
An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
Probabilistic cosmological mass mapping from weak lensing shear
Schneider, M. D.; Ng, K. Y.; Dawson, W. A.; ...
2017-04-10
Here, we infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.
Alpha and theta band dynamics related to sentential constraint and word expectancy.
Rommers, Joost; Dickson, Danielle S; Norton, James J S; Wlotko, Edward W; Federmeier, Kara D
2017-01-01
Despite strong evidence for prediction during language comprehension, the underlying mechanisms, and the extent to which they are specific to language, remain unclear. Re-analyzing an ERP study, we examined responses in the time-frequency domain to expected and unexpected (but plausible) words in strongly and weakly constraining sentences, and found results similar to those reported in nonverbal domains. Relative to expected words, unexpected words elicited an increase in the theta band (4-7 Hz) in strongly constraining contexts, suggesting the involvement of control processes to deal with the consequences of having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences exhibited a decrease in the alpha band (8-12 Hz) relative to weakly constraining sentences, suggesting that comprehenders can take advantage of predictive sentence contexts to prepare for the input. The results suggest that the brain recruits domain-general preparation and control mechanisms when making and assessing predictions during sentence comprehension.
Project #OA-FY17-0139, Feb 15, 2017. The EPA OIG plans to begin preliminary research on an audit of EPA's processes for managing background investigations of privileged users and taking action to remediate weaknesses in the agency's information security program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Yu, Yu; Zhang, Pengjie, E-mail: lezhang@sjtu.edu.cn
Photo-z error is one of the major sources of systematics degrading the accuracy of weak-lensing cosmological inferences. Zhang et al. proposed a self-calibration method combining galaxy–galaxy correlations and galaxy–shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at a level of 0.01%–1% merely using photometric data and does not rely on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving a constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak-lensing projects. We find that the algorithm provides a successful recovery of the scatter rates at the level of 0.01%–1%, and the true mean redshifts of photo-z bins at the level of 0.001, which may satisfy the requirements in future lensing surveys.
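The non-negative matrix factorization ingredient above can be illustrated with the classic multiplicative-update scheme, which is itself a fixed-point iteration that preserves non-negativity. This is a generic sketch, not the authors' constrained algorithm; the matrix below is a hypothetical stand-in with an exact non-negative rank-2 structure.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=2000, seed=0, eps=1e-9):
    """Factor V ~ W @ H with W, H >= 0 via multiplicative updates.
    Each update multiplies by a ratio of non-negative terms, so
    non-negativity of the factors is preserved by construction."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

# Hypothetical matrix built from non-negative rank-2 factors, so an exact fit exists.
V = [[1.0, 0.5, 0.0],
     [0.5, 0.5, 0.5],
     [0.0, 0.5, 1.0]]
W, H = nmf(V, rank=2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(3) for j in range(3))
print(f"fit error: {err:.4f}")
```

In the self-calibration setting the factors would additionally be constrained to be proper probabilities (columns summing to one), which is where the problem becomes a constrained optimization rather than plain NMF.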
Adaptive allocation for binary outcomes using decreasingly informative priors.
Sabo, Roy T
2014-01-01
A method of outcome-adaptive allocation is presented using Bayes methods, where a natural lead-in is incorporated through the use of informative yet skeptical prior distributions for each treatment group. These prior distributions are modeled on unobserved data in such a way that their influence on the allocation scheme decreases as the trial progresses. Simulation studies show this method to behave comparably to the Bayesian adaptive allocation method described by Thall and Wathen (2007), who incorporate a natural lead-in through sample-size-based exponents.
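One way to read "informative yet skeptical priors whose influence decreases as the trial progresses" is as Beta pseudo-counts tied to the unenrolled portion of the trial. The sketch below is a hedged illustration, not Sabo's method: a Thompson-style assignment rule, hypothetical response rates, and a prior weight that simply decays linearly with enrollment.

```python
import random

rng = random.Random(7)
N = 200                           # planned maximum enrollment
true_p = {"A": 0.65, "B": 0.45}   # hypothetical response probabilities
successes = {"A": 0, "B": 0}
failures = {"A": 0, "B": 0}
n_assigned = {"A": 0, "B": 0}

for t in range(N):
    remaining = N - t
    w = 0.25 * remaining / N          # prior weight decays to 0 over the trial
    prior_a = prior_b = 1.0 + w * 10  # skeptical: both arms pulled toward 0.5
    draws = {arm: rng.betavariate(prior_a + successes[arm],
                                  prior_b + failures[arm])
             for arm in ("A", "B")}
    arm = max(draws, key=draws.get)   # assign to the arm with the larger draw
    n_assigned[arm] += 1
    if rng.random() < true_p[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(n_assigned)
```

Early on the inflated pseudo-counts keep allocation near 1:1 (the "natural lead-in"); as the skeptical mass fades, the observed outcomes dominate and allocation tilts toward the better-performing arm.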
Signal processing in local neuronal circuits based on activity-dependent noise and competition
NASA Astrophysics Data System (ADS)
Volman, Vladislav; Levine, Herbert
2009-09-01
We study the characteristics of weak signal detection by a recurrent neuronal network with plastic synaptic coupling. It is shown that in the presence of an asynchronous component in synaptic transmission, the network acquires selectivity with respect to the frequency of weak periodic stimuli. For nonperiodic frequency-modulated stimuli, the response is quantified by the mutual information between input (signal) and output (network's activity) and is optimized by synaptic depression. Introducing correlations in signal structure resulted in the decrease in input-output mutual information. Our results suggest that in neural systems with plastic connectivity, information is not merely carried passively by the signal; rather, the information content of the signal itself might determine the mode of its processing by a local neuronal circuit.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
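The predictive calculation described above can be sketched analytically under simplifying assumptions that are mine, not the paper's: the estimated treatment difference is normal with known sampling SD, the prior clinical opinion is a normal distribution, and "reserve final judgement" means the final interval estimate still contains the point of clinical equivalence.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_reserve_judgement(n, prior_mean, prior_sd, sigma, delta0=0.0, z=1.96):
    """Predicted probability that the final interval estimate of the treatment
    difference contains the equivalence point delta0. Given the true difference,
    the estimate has SE sigma/sqrt(n); averaging over the prior
    delta ~ N(prior_mean, prior_sd^2) gives a normal predictive distribution."""
    se = sigma / math.sqrt(n)
    pred_sd = math.sqrt(prior_sd ** 2 + se ** 2)
    hi = (delta0 + z * se - prior_mean) / pred_sd
    lo = (delta0 - z * se - prior_mean) / pred_sd
    return phi(hi) - phi(lo)

# Choose n to control the predicted chance of an inconclusive trial
# (all numbers purely illustrative).
for n in (25, 100, 400):
    print(n, round(p_reserve_judgement(n, prior_mean=0.5, prior_sd=0.3, sigma=2.0), 3))
```

Sample size is then the smallest n for which this predicted probability of reserving judgement falls below a chosen threshold, which is the recommendation the abstract makes.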
2009-06-01
typically consists of a thermoset or thermoplastic polymer matrix reinforced with fibers that are much stronger and stiffer than the matrix. The PMCs are...high thermal or electrical conductivity, stealth characteristics , the ability to self-heal, communication, and sensor capabilities. The multi...have factual evidence of limitations and characteristics so as to utilize the material in a manner consistent with its strengths and weaknesses
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Proposed Information Collection to OMB Multifamily Project Applications and Construction Prior to Initial... facilities is also required as part of the application for firm commitment for mortgage insurance. Project owners/sponsors may apply for permission to commence construction prior to initial endorsement. DATES...
TESTING AUTOMATED SOLAR FLARE FORECASTING WITH 13 YEARS OF MICHELSON DOPPLER IMAGER MAGNETOGRAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, J. P.; Hoeksema, J. T., E-mail: JMason86@sun.stanford.ed, E-mail: JTHoeksema@sun.stanford.ed
Flare occurrence is statistically associated with changes in several characteristics of the line-of-sight magnetic field in solar active regions (ARs). We calculated magnetic measures throughout the disk passage of 1075 ARs spanning solar cycle 23 to find a statistical relationship between the solar magnetic field and flares. This expansive study of over 71,000 magnetograms and 6000 flares uses superposed epoch (SPE) analysis to investigate changes in several magnetic measures surrounding flares and ARs completely lacking associated flares. The results were used to seek any flare-associated signatures, with the capability to recover weak systematic signals with SPE analysis. SPE analysis is a method of combining large sets of data series in a manner that yields concise information. This is achieved by aligning the temporal location of a specified flare in each time series, then calculating the statistical moments of the 'overlapping' data. The best-calculated parameter, the gradient-weighted inversion-line length (GWILL), combines the primary polarity inversion line (PIL) length and the gradient across it. Therefore, GWILL is sensitive to complex field structures via the length of the PIL and shearing via the gradient. GWILL shows an average 35% increase during the 40 hr prior to X-class flares, a 16% increase before M-class flares, and a 17% increase prior to B-C-class flares. ARs not associated with flares tend to decrease in GWILL during their disk passage. Gilbert and Heidke skill scores are also calculated and show that even GWILL is not a reliable parameter for predicting solar flares in real time.
Calibrated birth-death phylogenetic time-tree priors for bayesian inference.
Heled, Joseph; Drummond, Alexei J
2015-05-01
Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Fan, Yue; Wang, Xiao; Peng, Qinke
2017-01-01
Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer the GRNs. However, most algorithms only pay attention to the gene expression data but do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and the topology information prior can improve the result.
Bridging gaps in health information systems: a case study from Somaliland, Somalia.
Askar, Ahmed; Ardakani, Malekafzali; Majdzade, Reza
2018-01-02
Reliable and timely health information is fundamental for health information systems (HIS) to work effectively. This case study aims to assess the Somaliland HIS in terms of its contextual situation and major weaknesses, and proposes key evidence-based recommendations. Data were collected through national-level key informant interviews, observations, group discussion and scoring using the HIS framework and assessment tool developed by the World Health Organization Health Metrics Network (WHO/HMN). The study found major weaknesses including: no policy, strategic plan or legal framework in place; fragmented sub-information systems; poor information and communications technology (ICT) infrastructure; poorly motivated and under-skilled personnel; dependence on unsustainable external funds; no census or civil registration in place; data from the private health sector not captured; insufficient technical capacity to analyse data collected by the HIS; and information that is not widely shared, disseminated or utilized for decision-making. We recommend developing a national HIS strategic plan that harmonizes and directs collective efforts to become a more integrated, cost-effective and sustainable HIS.
Van den Bulcke, Bo; Vyt, Andre; Vanheule, Stijn; Hoste, Eric; Decruyenaere, Johan; Benoit, Dominique
2016-05-01
This article describes a study that evaluated the quality of teamwork in a surgical intensive care unit and assessed whether teamwork could be improved significantly through a tailor-made intervention. The quality of teamwork prior to and after the intervention was assessed using the Interprofessional Practice and Education Quality Scales (IPEQS) using the PROSE online diagnostics and documenting system, which assesses three domains of teamwork: organisational factors, care processes, and team members' attitudes and beliefs. Furthermore, team members evaluated strengths and weaknesses of the teamwork through open-ended questions. Information gathered by means of the open questions was used to design a tailor-made 12-week intervention consisting of (1) optimising the existing weekly interdisciplinary meetings with collaborative decision-making and clear communication of goal-oriented actions, including the psychosocial aspects of care; and (2) organising and supporting the effective exchange of information over time between all professions involved. It was found that the intervention had a significant impact on organisational factors and care processes related to interprofessional teamwork for the total group and within all subgroups, despite baseline differences between the subgroups in interprofessional teamwork. In conclusion, teamwork, and more particularly the organisational aspects of interprofessional collaboration and processes of care, can be improved by a tailor-made intervention that takes into account the professional needs of healthcare workers.
Influence of prior information on pain involves biased perceptual decision-making.
Wiech, Katja; Vandekerckhove, Joachim; Zaman, Jonas; Tuerlinckx, Francis; Vlaeyen, Johan W S; Tracey, Irene
2014-08-04
Prior information about features of a stimulus is a strong modulator of perception. For instance, the prospect of more intense pain leads to an increased perception of pain, whereas the expectation of analgesia reduces pain, as shown in placebo analgesia and expectancy modulations during drug administration. This influence is commonly assumed to be rooted in altered sensory processing, and expectancy-related modulations in the spinal cord are often taken as evidence for this notion. Contemporary models of perception, however, suggest that prior information can also modulate perception by biasing perceptual decision-making, the inferential process underlying perception in which prior information is used to interpret sensory information. In this type of bias, the information is already present in the system before the stimulus is observed. Computational models can distinguish between changes in sensory processing and altered decision-making because they result in different response times for incorrect choices in a perceptual decision-making task (Figure S1A,B). Using a drift-diffusion model, we investigated the influence of both processes in two independent experiments. The results of both experiments strongly suggest that these changes in pain perception are predominantly based on altered perceptual decision-making. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Gradient-based reliability maps for ACM-based segmentation of hippocampus.
Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos
2014-04-01
Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, a substantial body of work relies on deformable models that incorporate prior knowledge about structures' anatomy and shape. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
Influence of social norms and palatability on amount consumed and food choice.
Pliner, Patricia; Mann, Nikki
2004-04-01
In two parallel studies, we examined the effect of social influence and palatability on amount consumed and on food choice. In Experiment 1, which looked at amount consumed, participants were provided with either palatable or unpalatable food; they were also given information about how much previous participants had eaten (large or small amounts) or were given no information. In the case of palatable food, participants ate more when led to believe that prior participants had eaten a great deal than when led to believe that prior participants had eaten small amounts or when provided with no information. This social-influence effect was not present when participants received unpalatable food. In Experiment 2, which looked at food choice, some participants learned that prior participants had chosen the palatable food, others learned that prior participants had chosen the unpalatable food, while still others received no information about prior participants' choices. The social-influence manipulation had no effect on participants' food choices; nearly all of them chose the palatable food. The results were discussed in the context of Crutchfield's (1955) distinction between judgments about matters of fact and judgments about preferences. The results were also used to illustrate the importance of palatability as a determinant of eating behavior.
What Are They Up To? The Role of Sensory Evidence and Prior Knowledge in Action Understanding
Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé
2011-01-01
Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations – acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that “intention” is an umbrella term that may embrace various sub-types each being assigned different scopes and targets. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. simple goal of a motor act), superordinate intentions (i.e. general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of the above-mentioned intentions, we varied (1) the amount of visual information available from the action scene and (2) participant's prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intentional judgments depend on a consistent interaction between visual information and participant's prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participant's priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation. 
PMID:21364992
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
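The accuracy gain from informative priors that this study reports can be seen even in a minimal conjugate sketch. The numbers and the beta-binomial model below are illustrative only (the study itself uses SISR over full state-space models): with sparse count data, a prior centred on plausible vital rates pulls the estimate toward realistic values.

```python
# Toy beta-binomial estimate of a survival rate from sparse data.
# All numbers are hypothetical, not from the study.

def posterior_mean(survived, n, prior_a, prior_b):
    """Posterior mean of a survival probability under a Beta(prior_a, prior_b) prior."""
    return (prior_a + survived) / (prior_a + prior_b + n)

# Sparse data: 3 of 4 marked animals survived the year.
vague = posterior_mean(3, 4, 1, 1)      # flat Beta(1, 1) prior
informed = posterior_mean(3, 4, 8, 2)   # prior mean 0.8, as if from earlier studies

print(round(vague, 3))     # 0.667
print(round(informed, 3))  # 0.786
```

With only four observations, the informative prior moves the estimate from 0.667 to 0.786; with hundreds of observations the two would nearly coincide, mirroring the study's finding that informative priors matter most when data are limited.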
Weak Measurement and Quantum Smoothing of a Superconducting Qubit
NASA Astrophysics Data System (ADS)
Tan, Dian
In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix rho(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix rho(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [rho(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix rho(t) with only diagonal elements in a given basis |n〉 may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal rho(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.
ERIC Educational Resources Information Center
Novick, Laura R.; Catley, Kefyn M.
2014-01-01
Science is an important domain for investigating students' responses to information that contradicts their prior knowledge. In previous studies of this topic, this information was communicated verbally. The present research used diagrams, specifically trees (cladograms) depicting evolutionary relationships among taxa. Effects of college…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... Food Under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 AGENCY... appropriate, and other forms of information technology. Prior Notice of Imported Food Under the Public Health... 0910-0520)--Revision The Public Health Security and Bioterrorism Preparedness and Response Act of 2002...
Ten-Month-Old Infants Use Prior Information to Identify an Actor's Goal
ERIC Educational Resources Information Center
Sommerville, Jessica A.; Crane, Catharyn C.
2009-01-01
For adults, prior information about an individual's likely goals, preferences or dispositions plays a powerful role in interpreting ambiguous behavior and predicting and interpreting behavior in novel contexts. Across two studies, we investigated whether 10-month-old infants' ability to identify the goal of an ambiguous action sequence was…
78 FR 32359 - Information Required in Prior Notice of Imported Food
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-30
... or animal food based on food safety reasons, such as intentional or unintentional contamination of an... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 1 [Docket No. FDA-2011-N-0179] RIN 0910-AG65 Information Required in Prior Notice of Imported Food AGENCY: Food and Drug...
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2011 CFR
2011-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2011 CFR
2011-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2012 CFR
2012-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2010 CFR
2010-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2012 CFR
2012-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2010 CFR
2010-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki
2015-01-01
This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape–location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape–location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. PMID:26277022
Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.
Mulder, Joris
2014-02-01
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
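The posterior-probability quantity discussed above can be made concrete with the encompassing-prior construction common in this literature: the Bayes factor for an inequality-constrained hypothesis against the unconstrained model is the ratio of posterior to prior mass satisfying the constraint. The normal densities and numbers below are hypothetical, chosen only to make the computation explicit; they are a sketch, not the paper's proposed methods.

```python
import math

def normal_cdf(x, mu=0.0, sd=1.0):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def encompassing_bf(post_mu, post_sd, prior_mu=0.0, prior_sd=10.0):
    """Bayes factor of H1: theta > 0 against the unconstrained model:
    ratio of posterior to prior probability that the constraint holds."""
    post_p = 1.0 - normal_cdf(0.0, post_mu, post_sd)
    prior_p = 1.0 - normal_cdf(0.0, prior_mu, prior_sd)
    return post_p / prior_p

# Posterior N(1, 1): Pr(theta > 0 | data) ~= 0.841; prior mass is 0.5
print(round(encompassing_bf(1.0, 1.0), 2))  # 1.68
```

Dividing by the prior mass (here 0.5) is what accounts for the size of the constrained parameter space, the first issue the abstract raises against using the posterior probability alone.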
Okada, Toshiyuki; Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki; Sato, Yoshinobu
2015-12-01
This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. 
Copyright © 2015 Elsevier B.V. All rights reserved.
Can quantum probes satisfy the weak equivalence principle?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seveso, Luigi, E-mail: luigi.seveso@unimi.it; Paris, Matteo G.A.; INFN, Sezione di Milano, I-20133 Milano
We address the question of whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such formulation requires the introduction of a gravitational field not to modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle's mass in its wavefunction, leading to violations of the WEP. Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.
Arshad, Hamed; Teymoori, Vahid; Nikooghadam, Morteza; Abbassi, Hassan
2015-08-01
Telecare medicine information systems (TMISs) aim to deliver appropriate healthcare services in an efficient and secure manner to patients. A secure mechanism for authentication and key agreement is required to provide proper security in these systems. Recently, Bin Muhaya demonstrated some security weaknesses of Zhu's authentication and key agreement scheme and proposed a security enhanced authentication and key agreement scheme for TMISs. However, we show that Bin Muhaya's scheme is vulnerable to off-line password guessing attacks and does not provide perfect forward secrecy. Furthermore, in order to overcome the mentioned weaknesses, we propose a new two-factor anonymous authentication and key agreement scheme using the elliptic curve cryptosystem. Security and performance analyses demonstrate that the proposed scheme not only overcomes the weaknesses of Bin Muhaya's scheme, but also is about 2.73 times faster than Bin Muhaya's scheme.
Agreement and Predictive Validity Using Less Conservative FNIH Sarcopenia Project Weakness Cutpoints
Shaffer, Nancy Chiles; Ferrucci, Luigi; Shardell, Michelle; Simonsick, Eleanor M.; Studenski, Stephanie
2016-01-01
OBJECTIVES The FNIH Sarcopenia Project derived conservative definitions for weakness and low lean mass, resulting in low prevalence and low agreement with prior definitions. The FNIH Project also estimated a less conservative cutpoint for low grip strength, potentially yielding a cutpoint for low lean mass more consistent with the European Working Group on Sarcopenia in Older People (EWGSOP). We derived lean mass cutpoints based on the less conservative cutpoint for grip strength (WeakI), and assessed agreement with EWGSOP and prediction of incident slow walking and mortality. DESIGN, SETTING, PARTICIPANTS, MEASUREMENTS Longitudinal analysis of 287 men and 258 women from the Baltimore Longitudinal Study of Aging aged >65 years, with 2–10 years of follow-up. Weakness was determined via hand dynamometer, appendicular lean mass (ALM) via DEXA, and slow walking by a 6-m usual-pace walk <0.8 m/s. Analyses used classification and regression tree analysis, Cohen's Kappa, and Cox models. RESULTS Cutpoints derived from WeakI were, for ALM (ALMI), <21.4 kg (men) and <14.1 kg (women), and for ALM adjusted for body mass index (ALM/BMII), <0.725 (men) and <0.591 (women). Kappas with EWGSOP were 0.65 (men) and 0.75 (women) for ALMI, and 0.34 (men) and 0.47 (women) for ALM/BMII. In men, the hazard ratio for incident slow walking by WeakI + ALMI was 2.44 (95% CI: 1.02–5.82) versus 2.91 (95% CI: 1.11–7.62) by EWGSOP. Neither approach predicted incident slow walking in women. CONCLUSION The ALMI cutpoints agree with EWGSOP and predict slow walking in men. Future studies should explore sex differences in the relationship between body composition and physical function and the impact of change in muscle mass on muscle strength and physical function. PMID:28024092
Jacquet, Pierre O.; Roy, Alice C.; Chambon, Valérian; Borghi, Anna M.; Salemme, Roméo; Farnè, Alessandro; Reilly, Karen T.
2016-01-01
Predicting intentions from observing another agent’s behaviours is often thought to depend on motor resonance – i.e., the motor system’s response to a perceived movement by the activation of its stored motor counterpart, but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task using transcranial magnetic stimulation to probe corticospinal excitability (CSE) and report that experimentally-induced updates in observers’ prior expectations modulate CSE when predictions are made under situations of perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that when observers predict others’ intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism that shares some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction. PMID:27243157
Jacquet, Pierre O; Roy, Alice C; Chambon, Valérian; Borghi, Anna M; Salemme, Roméo; Farnè, Alessandro; Reilly, Karen T
2016-05-31
Predicting intentions from observing another agent's behaviours is often thought to depend on motor resonance - i.e., the motor system's response to a perceived movement by the activation of its stored motor counterpart, but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task using transcranial magnetic stimulation to probe corticospinal excitability (CSE) and report that experimentally-induced updates in observers' prior expectations modulate CSE when predictions are made under situations of perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that when observers predict others' intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism that shares some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction.
NASA Astrophysics Data System (ADS)
Farnes, J. S.; Rudnick, L.; Gaensler, B. M.; Haverkorn, M.; O'Sullivan, S. P.; Curran, S. J.
2017-06-01
Protogalactic environments are typically identified using quasar absorption lines and can manifest as Damped Lyman-alpha Absorbers (DLAs) and Lyman Limit Systems (LLSs). We use radio observations of Faraday effects to test whether these galactic building blocks host a magnetized medium, by combining DLA and LLS detections with 1.4 GHz polarization data from the NRAO VLA Sky Survey (NVSS). We obtain a control, a DLA, and an LLS sample consisting of 114, 19, and 27 lines of sight, respectively. Using a Bayesian framework and weakly informative priors, we are unable to detect either coherent or random magnetic fields in DLAs: the regular coherent fields must be ≤ 2.8 μG, and the lack of depolarization suggests the weakly magnetized gas in DLAs is non-turbulent and quiescent. However, we find a mild suggestive indication that LLSs have coherent magnetic fields, with a 71.5% probability that LLSs have higher |RM| than a control, although this is sensitive to the redshift distribution. We also find a strong indication that LLSs host random magnetic fields, with a 95.5% probability that LLS lines of sight have lower polarized fractions than a control. The regular coherent fields within the LLSs must be ≤ 2.4 μG, and the magnetized gas must be highly turbulent with a typical turbulent length scale on the order of ≈5-20 pc. Our results are consistent with the standard dynamo paradigm, whereby magnetism in protogalaxies increases in coherence over cosmic time, and with a hierarchical galaxy formation scenario, with the DLAs and LLSs exploring different stages of magnetic field evolution in galaxies.
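The shrinkage behaviour of a weakly informative prior, influential when the data are weak and negligible when they are strong, can be sketched with a conjugate normal-normal calculation. All numbers below are hypothetical and the model is a deliberate simplification of the full Bayesian analyses used in studies like this one.

```python
# Precision-weighted posterior mean for a normal likelihood and normal prior.
# Hypothetical values throughout; prior_sd = 2.0 plays the role of a
# weakly informative prior centred at zero.
def shrunk_mean(estimate, se, prior_mean=0.0, prior_sd=2.0):
    w_data = 1.0 / se ** 2      # precision of the data
    w_prior = 1.0 / prior_sd ** 2  # precision of the prior
    return (w_data * estimate + w_prior * prior_mean) / (w_data + w_prior)

# Sparse data (large standard error): strong shrinkage toward the prior mean.
print(round(shrunk_mean(2.0, se=2.0), 3))  # 1.0
# Rich data (small standard error): the prior is essentially non-influential.
print(round(shrunk_mean(2.0, se=0.2), 3))  # 1.98
```

Comparing the two calls makes the sensitivity-analysis logic visible: the same prior halves a weakly identified estimate but moves a precise one by only about one percent.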
Magnification bias as a novel probe for primordial magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camera, S.; Fedeli, C.; Moscardini, L., E-mail: stefano.camera@tecnico.ulisboa.pt, E-mail: cosimo.fedeli@oabo.inaf.it, E-mail: lauro.moscardini@unibo.it
2014-03-01
In this paper we investigate magnetic fields generated in the early Universe. These fields are important candidates for explaining the origin of astrophysical magnetism observed in galaxies and galaxy clusters, whose genesis is still by and large unclear. Compared to the standard inflationary power spectrum, intermediate to small scales would experience further substantial matter clustering, were a cosmological magnetic field present prior to recombination. As a consequence, the bias and redshift distribution of galaxies would also be modified. Hitherto, primordial magnetic fields (PMFs) have been tested and constrained with a number of cosmological observables, e.g. the cosmic microwave background radiation, galaxy clustering and, more recently, weak gravitational lensing. Here, we explore the constraining potential of the density fluctuation bias induced by gravitational lensing magnification onto the galaxy-galaxy angular power spectrum. Such an effect is known as magnification bias. Compared to the usual galaxy clustering approach, magnification bias helps in lifting the pathological degeneracy present amongst power spectrum normalisation and galaxy bias. This is because magnification bias cross-correlates galaxy number density fluctuations of nearby objects with weak lensing distortions of high-redshift sources. Thus, it takes advantage of the gravitational deflection of light, which is insensitive to galaxy bias but powerful in constraining the density fluctuation amplitude. To scrutinise the potentiality of this method, we adopt a deep and wide-field spectroscopic galaxy survey. We show that magnification bias does contain important information on primordial magnetism, which will be useful in combination with galaxy clustering and shear. We find we shall be able to rule out at 95.4% CL amplitudes of PMFs larger than 5 × 10⁻⁴ nG for values of the PMF power spectral index n_B ∼ 0.
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.
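A one-dimensional numerical sketch makes the softening point concrete. Assume (this choice is ours, not Backus's) that the hard bound |x| ≤ 1 is softened to a Gaussian whose 1-sigma level matches the bound; then a large share of the soft prior's probability mass lies outside the hard bound, which is exactly the "unsupported new information" the abstract warns about.

```python
import random

random.seed(1)

# Hard bound (1-D): Q_x(x, x) = x**2 <= 1, i.e. |x| <= 1.
# Softened prior: x ~ N(0, 1), whose 1-sigma interval matches the bound.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
inside = sum(1 for x in samples if x * x <= 1.0) / len(samples)
# Only about 68% of the soft prior's mass respects the hard bound; the
# softened prior asserts nonzero probability for states the hard bound
# excludes outright.
```

In higher dimensions the mismatch worsens: the probability that a standard n-dimensional Gaussian satisfies the quadratic bound shrinks rapidly with n.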
Debunking vaccination myths: strong risk negations can increase perceived vaccination risks.
Betsch, Cornelia; Sachse, Katharina
2013-02-01
Information about risks is often contradictory, especially in the health domain. A vast amount of bizarre information on vaccine-adverse events (VAE) can be found on the Internet; most are posted by antivaccination activists. Several actors in the health sector struggle against these statements by negating claimed risks with scientific explanations. The goal of the present work is to find optimal ways of negating risk to decrease risk perceptions. In two online experiments, we varied the extremity of risk negations and their source. Outcome measures were perception of the probability of VAE, their expected severity (both variables serving as indicators of perceived risk), and vaccination intentions. Paradoxically, messages strongly indicating that there is "no risk" led to a higher perceived vaccination risk than weak negations. This finding extends previous work on the negativity bias, which has shown that information stating the presence of risk decreases risk perceptions, while information negating the existence of risk increases such perceptions. Several moderators were also tested; however, the effect occurred independently of the number of negations, recipient involvement, and attitude. Solely the credibility of the information source interacted with the extremity of risk negation: For credible sources (governmental institutions), strong and weak risk negations lead to similar perceived risk, while for less credible sources (pharmaceutical industries) weak negations lead to less perceived risk than strong negations. Optimal risk negation may profit from moderate rather than extreme formulations as a source's trustworthiness can vary.
Explanation and Prior Knowledge Interact to Guide Learning
ERIC Educational Resources Information Center
Williams, Joseph J.; Lombrozo, Tania
2013-01-01
How do explaining and prior knowledge contribute to learning? Four experiments explored the relationship between explanation and prior knowledge in category learning. The experiments independently manipulated whether participants were prompted to explain the category membership of study observations and whether category labels were informative in…
Byram, Ian R; Bushnell, Brandon D; Dugger, Keith; Charron, Kevin; Harrell, Frank E; Noonan, Thomas J
2010-07-01
The ability to identify pitchers at risk for injury could be valuable to a professional baseball organization. To our knowledge, there have been no prior studies examining the predictive value of preseason strength measurements. Preseason weakness of shoulder external rotators is associated with increased risk of in-season throwing-related injury in professional baseball pitchers. Cohort study (prognosis); Level of evidence, 2. Preseason shoulder strength was measured for all pitchers in a professional baseball organization over a 5-year period (2001-2005). Prone internal rotation (IR), prone external rotation (PER), seated external rotation (SER), and supraspinatus (SS) strength were tested during spring training before each season. The players were then prospectively followed throughout the season for incidence of throwing-related injury. Injuries were categorized on an ordinal scale, with no injury, injury treated conservatively, and injury resulting in surgery delineated 0, 1, and 2, respectively. Subset analyses of shoulder injuries and of players with prior surgery were also performed. The association between strength measurements and injury was analyzed using Spearman rank correlation. A statistically significant association was observed for PER strength (P = .003), SER strength (P = .048), and SS strength (P = .006) with throwing-related injury requiring surgical intervention. Supraspinatus strength was also significantly associated with incidence of any shoulder injury (P = .031). There was an association between the ratio of PER/IR strength and incidence of shoulder injury (P = .037) and some evidence for an association with overall incidence of throwing-related injury (P = .051). No associations were noted in the subgroup of players with prior surgery. Preseason weakness of external rotation and SS strength is associated with in-season throwing-related injury resulting in surgical intervention in professional baseball pitchers. 
Thus, preseason strength data may help identify players at risk for injury and formulate strengthening plans for prevention.
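A Spearman rank correlation of the kind used in this analysis (Pearson correlation of average ranks, with ties sharing the mean rank) can be sketched as follows. The strength values and ordinal injury codes are hypothetical, not the study's data.

```python
def _avg_ranks(xs):
    """1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = _avg_ranks(x), _avg_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: preseason external rotation strength (kg) paired with
# the ordinal injury code (0 none, 1 treated conservatively, 2 surgery).
strength = [22.0, 25.5, 19.0, 27.0, 18.5, 24.0]
injury = [1, 0, 2, 0, 2, 0]
rho = spearman(strength, injury)  # negative: weaker shoulder, more injury
```

A strongly negative rho in data like these would correspond to the reported association between preseason weakness and in-season injury.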
Photodiode Preamplifier for Laser Ranging With Weak Signals
NASA Technical Reports Server (NTRS)
Abramovici, Alexander; Chapsky, Jacob
2007-01-01
An improved preamplifier circuit has been designed for processing the output of an avalanche photodiode (APD) that is used in a high-resolution laser ranging system to detect laser pulses returning from a target. The improved circuit stands in contrast to prior such circuits in which the APD output current pulses are made to pass, variously, through wide-band or narrow-band load networks before preamplification. A major disadvantage of the prior wide-band load networks is that they are highly susceptible to noise, which degrades timing resolution. A major disadvantage of the prior narrow-band load networks is that they make it difficult to sample the amplitudes of the narrow laser pulses ordinarily used in ranging. In the improved circuit, a load resistor is connected to the APD output and its value is chosen so that the time constant defined by this resistance and the APD capacitance is large, relative to the duration of a laser pulse. The APD capacitance becomes initially charged by the pulse of current generated by a return laser pulse, so that the rise time of the load-network output is comparable to the duration of the return pulse. Thus, the load-network output is characterized by a fast-rising leading edge, which is necessary for accurate pulse timing. On the other hand, the resistance-capacitance combination constitutes a lowpass filter, which helps to suppress noise. The long time constant causes the load network output pulse to have a long shallow-sloping trailing edge, which makes it easy to sample the amplitude of the return pulse. The output of the load network is fed to a low-noise, wide-band amplifier. The amplifier must be a wide-band one in order to preserve the sharp pulse rise for timing. The suppression of noise and the use of a low-noise amplifier enable the ranging system to detect relatively weak return pulses.
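The load-network sizing described above can be illustrated numerically. The component values below are assumptions chosen only to show the relationship between the RC time constant and the pulse duration; they are not values from the actual design.

```python
import math

# Assumed values, for illustration only.
C = 2e-12        # 2 pF APD capacitance
R = 50e3         # 50 kilohm load resistor
pulse = 1e-9     # ~1 ns laser return pulse

tau = R * C      # 100 ns decay time constant, long relative to the pulse

# The current pulse charges C during its ~1 ns duration (fast-rising
# leading edge, good for timing); afterwards the voltage decays as
# exp(-t/tau), leaving a long, shallow trailing edge that is easy to
# sample for amplitude.
v_peak = 1.0                                     # normalized
v_after_pulse = v_peak * math.exp(-pulse / tau)  # barely decayed
```

With tau two orders of magnitude longer than the pulse, the amplitude sample taken shortly after the pulse still reflects nearly the full peak value, while the RC low-pass action suppresses wide-band noise.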
Role of Weak Measurements on States Ordering and Monogamy of Quantum Correlation
NASA Astrophysics Data System (ADS)
Hu, Ming-Liang; Fan, Heng; Tian, Dong-Ping
2015-01-01
The information-theoretic definition of quantum correlation, e.g., quantum discord, is measurement dependent. By considering the more general quantum measurements, weak measurements, which include the projective measurement as a limiting case, we show that while weak measurements can enable one to capture more quantumness of correlation in a state, it can also induce other counterintuitive quantum effects. Specifically, we show that the general measurements with different strengths can impose different orderings for quantum correlations of some states. It can also modify the monogamous character for certain classes of states as well which may diminish the usefulness of quantum correlation as a resource in some protocols. In this sense, we say that the weak measurements play a dual role in defining quantum correlation.
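One standard parameterization of a two-outcome weak qubit measurement, the Oreshkov-Brun form, makes the measurement-strength dependence concrete. It is used here as an illustrative assumption, not as this paper's exact construction.

```python
import math

def weak_measurement_pair(x):
    """Diagonal Kraus operators for a two-outcome weak measurement of
    strength x on a qubit (Oreshkov-Brun form). x -> infinity recovers
    the projective measurement; x -> 0 approaches no measurement."""
    a = math.sqrt((1 - math.tanh(x)) / 2)
    b = math.sqrt((1 + math.tanh(x)) / 2)
    m_plus = (a, b)   # diagonal entries acting on |0>, |1>
    m_minus = (b, a)
    return m_plus, m_minus

mp, mm = weak_measurement_pair(0.5)
# Completeness: M+^2 + M-^2 must equal the identity, entry by entry.
closure = [mp[i] ** 2 + mm[i] ** 2 for i in range(2)]

# Strong limit: at large x the pair approaches the projective measurement.
mp_strong, _ = weak_measurement_pair(20.0)
```

Because the pair remains a valid measurement at every strength x, quantities such as quantum discord can be evaluated along the whole interpolation from weak to projective measurement, which is what allows the orderings discussed above to change with x.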
Jiang, Gang; Quan, Hong; Wang, Cheng; Gong, Qiyong
2012-12-01
In this paper, a new method combining the translation-invariant (TI) and wavelet-threshold (WT) algorithms to distinguish weak and overlapping signals of proton magnetic resonance spectroscopy (1H-MRS) is presented. First, the 1H-MRS spectrum signal is transformed into the wavelet domain and its wavelet coefficients are obtained. Then, the TI and WT methods are applied to detect the weak signals overlapped by the strong ones. Through the analysis of the simulation data, we can see that both frequency and amplitude information of small signals can be obtained accurately by the algorithm, and through combination with the method of signal fitting, quantitative calculation of the area under weak signal peaks can be realized.
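A minimal sketch of translation-invariant wavelet thresholding follows, using a single-level Haar transform and one-sample cycle spinning. This is illustrative only; the authors' wavelet, decomposition depth, and threshold rule are not specified here.

```python
import math

def haar_level1(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    s = 1 / math.sqrt(2)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def inv_haar_level1(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

def soft(c, t):
    """Soft threshold: shrink a coefficient toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def ti_denoise(x, t):
    """Translation-invariant denoising by cycle spinning: denoise the
    signal and its one-sample circular shift, unshift, and average."""
    outs = []
    for shift in (0, 1):
        xs = x[shift:] + x[:shift]
        a, d = haar_level1(xs)
        y = inv_haar_level1(a, [soft(c, t) for c in d])
        outs.append(y[-shift:] + y[:-shift] if shift else y)
    return [(u + v) / 2 for u, v in zip(*outs)]
```

Averaging over shifts suppresses the shift-dependent artifacts of plain wavelet thresholding, which is the property that helps recover weak peaks sitting near strong ones.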
Hastings, Am; McKinley, R K; Fraser, R C
2006-05-01
This paper seeks to describe the consultation strengths and weaknesses of senior medical students, the explicit and prioritised strategies for improvement utilised in student feedback, and curriculum developments informed by this work. Prospective, descriptive study of students on clinical placements in general practice. All were observed directly by 2 assessors in consultation with 5 patients in a general practice setting. Performance was judged against 5 categories of consultation competence and 35 component competences as contained in a modified version of the Leicester Assessment Package. Specific strategies for improvement were selected from a list of 69 previously formulated strategies. Data from 1116 students were included. The consultation competences identified most frequently as strengths related to interpersonal skills, while weaknesses were mainly in the domain of clinical problem-solving. The median number of key strengths identified per student was 5, with 5 additional but lesser strengths. A median of 3 key and lesser weaknesses were identified. The average number of strategies selected to address an identified weakness was 1.2. Students rated the assessment process and its impact very positively. The systematic assessment of the consultation competence of medical students by direct observation involving real patients is feasible and facilitates the 'educational diagnosis' of individuals and of their peer group. It has informed development of teaching and generated research hypotheses.
Looking forwards and backwards: The real-time processing of Strong and Weak Crossover
Lidz, Jeffrey; Phillips, Colin
2017-01-01
We investigated the processing of pronouns in Strong and Weak Crossover constructions as a means of probing the extent to which the incremental parser can use syntactic information to guide antecedent retrieval. In Experiment 1 we show that the parser accesses a displaced wh-phrase as an antecedent for a pronoun when no grammatical constraints prohibit binding, but the parser ignores the same wh-phrase when it stands in a Strong Crossover relation to the pronoun. These results are consistent with two possibilities. First, the parser could apply Principle C at antecedent retrieval to exclude the wh-phrase on the basis of the c-command relation between its gap and the pronoun. Alternatively, retrieval might ignore any phrases that do not occupy an Argument position. Experiment 2 distinguished between these two possibilities by testing antecedent retrieval under Weak Crossover. In Weak Crossover binding of the pronoun is ruled out by the argument condition, but not Principle C. The results of Experiment 2 indicate that antecedent retrieval accesses matching wh-phrases in Weak Crossover configurations. On the basis of these findings we conclude that the parser can make rapid use of Principle C and c-command information to constrain retrieval. We discuss how our results support a view of antecedent retrieval that integrates inferences made over unseen syntactic structure into constraints on backward-looking processes like memory retrieval. PMID:28936483
Fujisaki, Ikuko; Rice, Kenneth G.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2009-01-01
Feeding opportunities of American alligators (Alligator mississippiensis) in freshwater wetlands in south Florida are closely linked to hydrologic conditions. In the Everglades, seasonally and annually fluctuating surface water levels affect populations of aquatic organisms that alligators consume. Since prey becomes more concentrated when water depth decreases, we hypothesized an inverse relationship between body condition and water depth in the Everglades. On average, condition of adult alligators in the dry season was significantly higher than in the wet season, but this was not the case for juveniles/subadults. The correlation between body condition and measured water depth at capture locations was weak; however, there was a significant negative correlation between the condition and predicted water depth prior to capture for all animals except for spring juveniles/subadults which had a weak positive condition-water depth relationship. Overall, a relatively strong inverse correlation occurred at 10-49 days prior to the capture day, suggesting that current body condition of alligators may depend on feeding opportunities during that period. Fitted regression of body condition on water depth (mean depth of 10 days when condition-water depth correlation was greatest) resulted in a significantly negative slope, except for spring adult females and spring juveniles/subadults for which slopes were not significantly different from zero. Our results imply that water management practices may be critical for alligators in the Everglades since water depth can affect animal condition in a relatively short period of time.
Koren, Hila; Kaminer, Ido; Raban, Daphne Ruth
2016-01-01
Widely used information diffusion models such as Independent Cascade Model, Susceptible Infected Recovered (SIR) and others fail to acknowledge that information is constantly subject to modification. Some aspects of information diffusion are best explained by network structural characteristics while in some cases strong influence comes from individual decisions. We introduce reinvention, the ability to modify information, as an individual level decision that affects the diffusion process as a whole. Based on a combination of constructs from the Diffusion of Innovations and the Critical Mass Theories, the present study advances the CMS (consume, modify, share) model which accounts for the interplay between network structure and human behavior and interactions. The model's building blocks include processes leading up to and following the formation of a critical mass of information adopters and disseminators. We examine the formation of an inflection point, information reach, sustainability of the diffusion process and collective value creation. The CMS model is tested on two directed networks and one undirected network, assuming weak or strong ties and applying constant and relative modification schemes. While all three networks are designed for disseminating new knowledge they differ in structural properties. Our findings suggest that modification enhances the diffusion of information in networks that support undirected connections and carries the biggest effect when information is shared via weak ties. Rogers' diffusion model and traditional information contagion models are fine tuned. Our results show that modifications not only contribute to a sustainable diffusion process, but also aid information in reaching remote areas of the network. The results point to the importance of cultivating weak ties, allowing reciprocal interaction among nodes and supporting the modification of information in promoting diffusion processes. 
These results have theoretical and practical implications for designing networks aimed at accelerating the creation and diffusion of information. PMID:27798636
Dynamics of coupled mode solitons in bursting neural networks
NASA Astrophysics Data System (ADS)
Nfor, N. Oma; Ghomsi, P. Guemkam; Moukam Kakmeni, F. M.
2018-02-01
Using an electrically coupled chain of Hindmarsh-Rose neural models, we analytically derived the nonlinearly coupled complex Ginzburg-Landau equations. This is realized by superimposing the lower and upper cutoff modes of wave propagation and by employing the multiple scale expansions in the semidiscrete approximation. We explore the modified Hirota method to analytically obtain the bright-bright pulse soliton solutions of our nonlinearly coupled equations. With these bright solitons as initial conditions of our numerical scheme, and knowing that electrical signals are the basis of information transfer in the nervous system, it is found that prior to collisions at the boundaries of the network, neural information is purely conveyed by bisolitons at lower cutoff mode. After collision, the bisolitons are completely annihilated and neural information is now relayed by the upper cutoff mode via the propagation of plane waves. It is also shown that the linear gain of the system is inextricably linked to the complex physiological mechanisms of ion mobility, since the speeds and spatial profiles of the coupled nerve impulses vary with the gain. A linear stability analysis performed on the coupled system mainly confirms the instability of plane waves in the neural network, with a glaring example of the transition of weak plane waves into a dark soliton and then static kinks. Numerical simulations have confirmed the annihilation phenomenon subsequent to collision in neural systems. They equally showed that the symmetry breaking of the pulse solution of the system leaves in the network static internal modes, sometimes referred to as Goldstone modes.
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. 
Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
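The balance PIBR tunes between measurements and the prior image can be reduced to a scalar caricature: minimizing a data-fit term plus a prior-image penalty whose weight plays the role of the "prior image strength". The numbers below are illustrative, not from the phantom studies.

```python
def pibr_scalar(a, y, beta, x_prior):
    """Minimizer of (a*x - y)**2 + beta*(x - x_prior)**2, the scalar
    analogue of a prior-image penalized least-squares reconstruction.
    Setting the derivative to zero gives the closed form below."""
    return (a * y + beta * x_prior) / (a * a + beta)

# Weak prior strength: the estimate tracks the data-only solution y / a.
x_weak = pibr_scalar(a=2.0, y=6.0, beta=0.01, x_prior=0.0)
# Strong prior strength: the estimate is pulled toward the prior image.
x_strong = pibr_scalar(a=2.0, y=6.0, beta=100.0, x_prior=0.0)
```

The prospective design question in the abstract amounts to choosing beta (here a single number, in PIBR a spatially varying map) so that a presumed anatomical change of a given attenuation is admitted rather than suppressed.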
The influence of preburial insect access on the decomposition rate.
Bachmann, Jutta; Simmons, Tal
2010-07-01
This study compared total body score (TBS) in buried remains (35 cm depth) with and without insect access prior to burial. Sixty rabbit carcasses were exhumed at 50 accumulated degree day (ADD) intervals. Weight loss, TBS, intra-abdominal decomposition, carcass/soil interface temperature, and below-carcass soil pH were recorded and analyzed. Results showed significant differences (p < 0.001) in decomposition rates between carcasses with and without insect access prior to burial. An approximately 30% enhanced decomposition rate with insects was observed. TBS was the most valid tool in postmortem interval (PMI) estimation. All other variables showed only weak relationships to decomposition stages, adding little value to PMI estimation. Although progress in estimating the PMI for surface remains has been made, no previous studies have accomplished this for buried remains. This study builds a framework to which further comparable studies can contribute, to produce predictive models for PMI estimation in buried human remains.
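The accumulated-degree-day bookkeeping behind the 50 ADD exhumation intervals can be sketched as follows; the daily mean temperatures are hypothetical.

```python
# Accumulated degree days (ADD): running sum of mean daily temperatures
# (degrees C above a 0-degree base). Days below the base contribute 0.
daily_mean_temp = [12.5, 14.0, 11.5, 12.0, 13.0, 12.5, 13.5, 12.0]

add = 0.0
days_to_50 = None
for day, t in enumerate(daily_mean_temp, start=1):
    add += max(t, 0.0)
    if days_to_50 is None and add >= 50.0:
        days_to_50 = day   # first exhumation falls on this day
```

Expressing elapsed time as ADD rather than calendar days is what lets decomposition stages be compared across sites with different temperature regimes.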
Code of Federal Regulations, 2010 CFR
2010-07-01
..., the parties may submit any additional relevant information relating to the violation, either prior to... to submit additional information or request a safety and health conference with the District Manager... parties to discuss the issues involved prior to the conference. (d) MSHA will consider all relevant...
6 CFR 5.46 - Procedure when response to demand is required prior to receiving instructions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 6 Domestic Security 1 2010-01-01 2010-01-01 false Procedure when response to demand is required prior to receiving instructions. 5.46 Section 5.46 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY DISCLOSURE OF RECORDS AND INFORMATION Disclosure of Information in Litigation § 5...
Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.
ERIC Educational Resources Information Center
Samejima, Fumiko
The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…
Small-Sample Equating with Prior Information. Research Report. ETS RR-09-25
ERIC Educational Resources Information Center
Livingston, Samuel A.; Lewis, Charles
2009-01-01
This report proposes an empirical Bayes approach to the problem of equating scores on test forms taken by very small numbers of test takers. The equated score is estimated separately at each score point, making it unnecessary to model either the score distribution or the equating transformation. Prior information comes from equatings of other…
Freedom of Information Act: FOIA Task Force Report
FOIA Task Force review of any significant weaknesses, and recommendations for improving the efficiency and effectiveness of the agency's FOIA operations to ensure that information is provided to the American public in a timely fashion.
Reconnaissance-Pull - Seeking the Path of Least Resistance
1990-12-15
Carl von Clausewitz, the great eighteenth century military theorist, also professed pitting friendly strength against enemy weakness. Addressing the importance of reconnaissance, its relation to intelligence, and the advantage of pitting friendly strength against enemy weaknesses. The Soviets use a...64. Ibid. 65. Vasily Gerasimovich Reznichenko, "Taktika" (Tactics), translated by Foreign Broadcast Information Service (Moscow, 1988): p. 55.
Hubble confirms cosmic acceleration with weak lensing
2017-12-08
NASA/ESA Hubble Release Date: March 25, 2010 This image shows a smoothed reconstruction of the total (mostly dark) matter distribution in the COSMOS field, created from data taken by the NASA/ESA Hubble Space Telescope and ground-based telescopes. It was inferred from the weak gravitational lensing distortions that are imprinted onto the shapes of background galaxies. The colour coding indicates the distance of the foreground mass concentrations as gathered from the weak lensing effect. Structures shown in white, cyan, and green are typically closer to us than those indicated in orange and red. To improve the resolution of the map, data from galaxies both with and without redshift information were used. The new study presents the most comprehensive analysis of data from the COSMOS survey. The researchers have, for the first time ever, used Hubble and the natural "weak lenses" in space to characterise the accelerated expansion of the Universe. Credit: NASA, ESA, P. Simon (University of Bonn) and T. Schrabback (Leiden Observatory) To learn more about this image go to: www.spacetelescope.org/news/html/heic1005.html For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html
Rowe, Heather; Fisher, Jane; Quinlivan, Julie
2009-03-01
Prenatal maternal serum screening allows assessment of risk of chromosomal abnormalities in the fetus and is increasingly being offered to all women regardless of age or prior risk. However, ensuring informed choice to participate in screening is difficult and the psychological implications of making an informed decision are uncertain. The aim of this study was to compare the growth of maternal-fetal emotional attachment in groups of women whose decisions about participation in screening were informed or not informed. A prospective longitudinal design was used. English-speaking women were recruited in antenatal clinics prior to the offer of second trimester maternal screening. Three self-report questionnaires completed over the course of pregnancy used validated measures of informed choice and maternal-fetal emotional attachment. Attachment scores throughout pregnancy in informed and not-informed groups were compared in repeated measures analysis. 134 completed the first assessment (recruitment 73%) and 68 (58%) provided complete data. The informed group had significantly lower attachment scores (p = 0.023) than the not-informed group prior to testing, but scores were similar (p = 0.482) after test results were known. The findings raise questions about the impact of delayed maternal-fetal attachment and appropriate interventions to facilitate informed choice to participate in screening.
Barba-Montoya, Jose; Dos Reis, Mario; Yang, Ziheng
2017-09-01
Fossil calibrations are the utmost source of information for resolving the distances between molecular sequences into estimates of absolute times and absolute rates in molecular clock dating analysis. The quality of calibrations is thus expected to have a major impact on divergence time estimates even if a huge amount of molecular data is available. In Bayesian molecular clock dating, fossil calibration information is incorporated in the analysis through the prior on divergence times (the time prior). Here, we evaluate three strategies for converting fossil calibrations (in the form of minimum- and maximum-age bounds) into the prior on times, which differ according to whether they borrow information from the maximum age of ancestral nodes and minimum age of descendent nodes to form constraints for any given node on the phylogeny. We study a simple example that is analytically tractable, and analyze two real datasets (one of 10 primate species and another of 48 seed plant species) using three Bayesian dating programs: MCMCTree, MrBayes and BEAST2. We examine how different calibration strategies, the birth-death process, and automatic truncation (to enforce the constraint that ancestral nodes are older than descendent nodes) interact to determine the time prior. In general, truncation has a great impact on calibrations so that the effective priors on the calibration node ages after the truncation can be very different from the user-specified calibration densities. The different strategies for generating the effective prior also had considerable impact, leading to very different marginal effective priors. Arbitrary parameters used to implement minimum-bound calibrations were found to have a strong impact upon the prior and posterior of the divergence times. Our results highlight the importance of inspecting the joint time prior used by the dating program before any Bayesian dating analysis. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
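The truncation effect the abstract describes, where the effective prior on node ages differs from the user-specified calibration density, can be reproduced in a toy simulation (the uniform bounds below are invented for illustration and are not from the paper or any dating program):

```python
# Illustrative sketch: two uniform fossil-calibration densities, an
# ancestor on [50, 100] Ma and a descendant on [20, 80] Ma, truncated so
# the ancestor is always older. The marginal "effective prior" on each
# node then differs from the user-specified density.
import random

random.seed(1)

def sample_effective_prior(n=20000):
    anc, des = [], []
    while len(anc) < n:
        a = random.uniform(50.0, 100.0)   # user-specified ancestor density
        d = random.uniform(20.0, 80.0)    # user-specified descendant density
        if a > d:                          # truncation: ancestor must be older
            anc.append(a)
            des.append(d)
    return anc, des

anc, des = sample_effective_prior()
mean_anc = sum(anc) / len(anc)   # untruncated mean would be 75
mean_des = sum(des) / len(des)   # untruncated mean would be 50
```

Truncation pushes the two effective means apart (ancestor above 75 Ma, descendant below 50 Ma), which is why the paper recommends inspecting the joint time prior before running the analysis.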
2008-03-01
amount of arriving data, extract actionable information, and integrate it with prior knowledge. Add to that the pressures of today’s fusion center...information, and integrate it with prior knowledge. Add to that the pressures of today’s fusion center climate and it becomes clear that analysts, police... fusion centers, including specifics about how these problems manifest at the Illinois State Police (ISP) Statewide Terrorism and Intelligence Center
Roberts, J Scott; Gornick, Michele C; Carere, Deanna Alexis; Uhlmann, Wendy R; Ruffin, Mack T; Green, Robert C
2017-01-01
To describe the interests, decision making, and responses of consumers of direct-to-consumer personal genomic testing (DTC-PGT) services. Prior to 2013 regulatory restrictions on DTC-PGT services, 1,648 consumers from 2 leading companies completed Web surveys before and after receiving test results. Prior to testing, DTC-PGT consumers were as interested in ancestry (74% very interested) and trait information (72%) as they were in disease risks (72%). Among disease risks, heart disease (68% very interested), breast cancer (67%), and Alzheimer disease (66%) were of greatest interest prior to testing. Interest in disease risks was associated with female gender and poorer self-reported health (p < 0.01). Many consumers (38%) did not consider the possibility of unwanted information before purchasing services; this group was more likely to be older, male, and less educated (p < 0.05). After receiving results, 59% of respondents said test information would influence management of their health; 2% reported regret about seeking testing and 1% reported harm from results. DTC-PGT has attracted controversy because of the health-related information it provides, but nonmedical information is of equal or greater interest to consumers. Although many consumers did not fully consider potential risks prior to testing, DTC-PGT was generally perceived as useful in informing future health decisions. © 2017 S. Karger AG, Basel.
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
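The Noisy-OR combination described above can be sketched in a few lines (the support values and source names are hypothetical; this is only the combination rule, not the authors' full model):

```python
# Minimal sketch of a Noisy-OR consensus prior: the prior belief in an
# edge is the noisy-OR of the supports q_i that each information source
# (e.g. a pathway database, GO terms, protein domain data) gives it.

def noisy_or(supports):
    """P(edge) = 1 - prod(1 - q_i): one strong source is enough."""
    p = 1.0
    for q in supports:
        p *= (1.0 - q)
    return 1.0 - p

# Three weak sources vs. mostly-silent sources with one strong supporter.
weak_only = noisy_or([0.2, 0.2, 0.2])
one_strong = noisy_or([0.05, 0.05, 0.9])
```

The rule "picks up the strongest support": a single confident source drives the prior edge probability above 0.9 even when the other sources are nearly silent.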
Joint weak value for all order coupling using continuous variable and qubit probe
NASA Astrophysics Data System (ADS)
Kumari, Asmita; Pan, Alok Kumar; Panigrahi, Prasanta K.
2017-11-01
The notion of weak measurement in quantum mechanics has gained significant and wide interest in realizing apparently counterintuitive quantum effects. In recent times, several theoretical and experimental works have been reported demonstrating the joint weak value of two observables where the coupling strength is restricted to the second order. In this paper, we extend such a formulation by providing a complete treatment of the joint weak measurement scenario for all-order coupling for observables satisfying A² = 𝕀 and A² = A, which allows us to reveal several hitherto unexplored features. By considering the probe state to be discrete as well as continuous variable, we demonstrate how the joint weak value can be inferred for any given strength of the coupling. A particularly interesting result is that even if the initial pointer state is uncorrelated, the single pointer displacement can provide information about the joint weak value, if at least the third order of the coupling is taken into account. As an application of our scheme, we provide an all-order-coupling treatment of the well-known Hardy paradox by considering continuous as well as discrete meter states and show how the negative joint weak probabilities emerge in the quantum paradoxes at the weak coupling limit.
NASA Astrophysics Data System (ADS)
Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min
2018-03-01
In the human brain, the Corpus Callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields. Convolutional neural networks (CNN) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas segmentation models have been widely used in medical image segmentation because an atlas carries powerful information about the target structure to be segmented, consisting of MR images and corresponding manual segmentations of the target structure. We incorporated prior information, such as the location and intensity distribution of the target structure (i.e., the CC), derived from multi-atlas images into the CNN training process to improve training. The CNN with prior information showed better segmentation performance than the CNN without it.
Effects of Prior Knowledge on Memory: Implications for Education
ERIC Educational Resources Information Center
Shing, Yee Lee; Brod, Garvin
2016-01-01
The encoding, consolidation, and retrieval of events and facts form the basis for acquiring new skills and knowledge. Prior knowledge can enhance those memory processes considerably and thus foster knowledge acquisition. But prior knowledge can also hinder knowledge acquisition, in particular when the to-be-learned information is inconsistent with…
Menarche: Prior Knowledge and Experience.
ERIC Educational Resources Information Center
Skandhan, K. P.; And Others
1988-01-01
Recorded menstruation information among 305 young women in India, assessing the differences between those who did and did not have knowledge of menstruation prior to menarche. Those with prior knowledge considered menarche to be a normal physiological function and had a higher rate of regularity, lower rate of dysmenorrhea, and earlier onset of…
Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.
Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana
2013-10-01
Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.
Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors
PETERSON, CHRISTINE; VANNUCCI, MARINA; KARAKAS, CEMAL; CHOI, WILLIAM; MA, LIHUA; MALETIĆ-SAVATIĆ, MIRJANA
2014-01-01
Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation. PMID:24533172
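The informative-prior idea in the two records above can be illustrated with the double exponential (Laplace) density itself (the rates and the entry value below are made up; this is only the penalty comparison, not the full gamma-hyperprior model):

```python
# Hedged illustration: give each off-diagonal precision-matrix entry a
# Laplace (double exponential) prior whose shrinkage rate depends on
# whether the covariate pair is close in a reference network.
import math

def laplace_logpdf(x, rate):
    """Log density of the double exponential prior at entry value x."""
    return math.log(rate / 2.0) - rate * abs(x)

# Pairs close in the reference network get a smaller shrinkage rate,
# so a nonzero partial correlation between them is penalized less.
rate_close, rate_far = 1.0, 10.0
x = 0.5   # a candidate nonzero off-diagonal precision entry
penalty_close = -laplace_logpdf(x, rate_close)
penalty_far = -laplace_logpdf(x, rate_far)
```

A nonzero entry between covariates with a known relationship incurs a much smaller penalty, which is exactly how the tailored hyperpriors encourage edges supported by the reference network.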
Petit, Caroline; Samson, Adeline; Morita, Satoshi; Ursino, Moreno; Guedj, Jérémie; Jullien, Vincent; Comets, Emmanuelle; Zohar, Sarah
2018-06-01
The number of trials conducted and the number of patients per trial are typically small in paediatric clinical studies. This is due to ethical constraints and the complexity of the medical process for treating children. While incorporating prior knowledge from adults may be extremely valuable, this must be done carefully. In this paper, we propose a unified method for designing and analysing dose-finding trials in paediatrics, while bridging information from adults. The dose-range is calculated under three extrapolation options, linear, allometry and maturation adjustment, using adult pharmacokinetic data. To do this, it is assumed that target exposures are the same in both populations. The working model and prior distribution parameters of the dose-toxicity and dose-efficacy relationships are obtained using early-phase adult toxicity and efficacy data at several dose levels. Priors are integrated into the dose-finding process through Bayesian model selection or adaptive priors. This calibrates the model to adjust for misspecification, if the adult and pediatric data are very different. We performed a simulation study which indicates that incorporating prior adult information in this way may improve dose selection in children.
Automatic face naming by learning discriminative affinity matrices from weakly labeled images.
Xiao, Shijie; Xu, Dong; Wu, Jianxin
2015-10-01
Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.
Development of the National Health Information Systems in Botswana: Pitfalls, prospects and lessons.
Seitio-Kgokgwe, Onalenna; Gauld, Robin D C; Hill, Philip C; Barnett, Pauline
2015-01-01
Studies evaluating the development of health information systems in developing countries are limited. Most of the available studies are based on pilot projects or cross-sectional studies. We took a longitudinal approach to analysing the development of Botswana's health information systems. We aimed to: (i) trace the development of the national health information systems in Botswana; (ii) identify pitfalls during development and prospects that could be maximized to strengthen the system; and (iii) draw lessons for Botswana and other countries working on establishing or improving their health information systems. This article is based on data collected through document analysis and key informant interviews with policy makers, senior managers and staff of the Ministry of Health and senior officers from various stakeholder organizations. Lack of central coordination, weak leadership, weak policy and regulatory frameworks, and inadequate resources limited development of the national health information systems in Botswana. Lack of attention to issues of organizational structure is one of the major pitfalls. The ongoing reorganization of the Ministry of Health provides an opportunity to reposition the health information system function. The current efforts, including development of the health information management policy and plan, could enhance the health information management system.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely-used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and the prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong anti-noise ability.
NASA Astrophysics Data System (ADS)
Caticha, Ariel
2007-11-01
What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
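The claim that Bayes' rule is a special case of relative-entropy updating can be made concrete with a small discrete example (the two hypotheses and their likelihoods are invented; this shows only the ordinary Bayes step and the relative entropy it induces, not the full ME machinery):

```python
# Toy discrete update: a prior over two hypotheses is revised by data,
# and the logarithmic relative entropy (KL divergence) of posterior
# from prior measures how much updating the information forced.
import math

prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.2}   # P(data | hypothesis)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

# Relative entropy of posterior from prior, in nats.
kl = sum(posterior[h] * math.log(posterior[h] / prior[h]) for h in prior)
```

The posterior moves toward the better-supported hypothesis, and the relative entropy is strictly positive whenever the information actually changes the prior, consistent with the abstract's reading of information as "whatever constrains rational beliefs."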
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians, where the first clinician is pessimistic about the drug and the second is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
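The conjugate mixture update behind this design can be sketched directly (the prior parameters and the observed counts below are illustrative, not the paper's tabulated values):

```python
# Hedged sketch: a pessimistic and an optimistic clinician each
# contribute a Beta prior for the response rate; after observing x
# responses in n patients the posterior is again a mixture of Betas,
# with re-weighted components (conjugacy for the Beta-Binomial model).
import math

def log_beta(a, b):
    """Log of the Beta function via lgamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def mixture_posterior(components, x, n):
    """components: list of (weight, a, b). Returns posterior components."""
    logs = []
    for w, a, b in components:
        # log prior weight + log marginal likelihood under this component
        logs.append(math.log(w) + log_beta(a + x, b + n - x) - log_beta(a, b))
    m = max(logs)                       # stabilize before exponentiating
    ws = [math.exp(l - m) for l in logs]
    s = sum(ws)
    return [(ws[i] / s, a + x, b + n - x)
            for i, (_, a, b) in enumerate(components)]

prior = [(0.5, 2.0, 8.0),   # pessimistic clinician: prior mean 0.2
         (0.5, 8.0, 2.0)]   # optimistic clinician: prior mean 0.8
post = mixture_posterior(prior, x=15, n=20)   # 15/20 responses observed
```

With 15 responses in 20 patients, nearly all posterior weight moves to the optimistic component, showing how the data adjudicate between the two clinicians' priors.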
Network inference using informative priors
Mukherjee, Sach; Speed, Terence P.
2008-01-01
Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of “network inference” is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling. PMID:18799736
Network inference using informative priors.
Mukherjee, Sach; Speed, Terence P
2008-09-23
Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of "network inference" is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling.
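An edge-wise graph prior of the general kind these two records describe can be sketched as a scoring function (the confidence values and the sparsity weight are invented for illustration; the paper's priors are richer, covering edge classes and degree distributions):

```python
# Hedged sketch: prior log-probability (up to a constant) of a network,
# built from per-edge prior confidence scores plus a sparsity penalty.

def log_prior(edges, confidence, sparsity=1.0):
    """edges: set of (i, j) pairs; confidence: dict mapping an edge to
    its prior log-odds of existing; sparsity penalizes each edge."""
    return sum(confidence.get(e, 0.0) for e in edges) - sparsity * len(edges)

# Hypothetical prior knowledge: edge (0, 1) is a well-known signaling
# link, edge (1, 2) has weak literature support.
confidence = {(0, 1): 2.0, (1, 2): 0.5}

sparse_known = log_prior({(0, 1)}, confidence)
dense_unknown = log_prior({(0, 1), (0, 2), (1, 2)}, confidence)
```

In an MCMC sampler over structures, this score would be added to the data log-likelihood of each candidate graph, so sparse networks built from well-supported edges are favored a priori over dense ones containing unsupported edges.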
Winograd, Michael R; Rosenfeld, J Peter
2014-12-01
In P300-Concealed Information Tests used with mock crime scenarios, the amount of detail revealed to a participant prior to the commission of the mock crime can have a serious impact on a study's validity. We predicted that exposure to crime details through instructions would bias detection rates toward enhanced sensitivity. In a 2 × 2 factorial design, participants were either informed (through mock crime instructions) or naïve as to the identity of a to-be-stolen item, and then either committed (guilty) or did not commit (innocent) the crime. Results showed that prior knowledge of the stolen item was sufficient to cause 69% of innocent-informed participants to be incorrectly classified as guilty. Further, we found a trend toward enhanced detection rate for guilty-informed participants over guilty-naïve participants. Results suggest that revealing details to participants through instructions biases detection rates in the P300-CIT toward enhanced sensitivity. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.
2017-12-01
In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and combine them with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists in two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists in selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists in assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.
Hommes, J; Rienties, B; de Grave, W; Bos, G; Schuwirth, L; Scherpbier, A
2012-12-01
World-wide, universities in health sciences have transformed their curricula to include collaborative learning and facilitate the students' learning process. Interaction has been acknowledged to be the synergistic element in this learning context. However, students spend the majority of their time outside the classroom, and interaction does not stop there. Therefore we studied how informal social interaction influences student learning. Moreover, to explore what really matters in the students' learning process, we tested a model of how the generally known important constructs (prior performance, motivation and social integration) relate to informal social interaction and student learning. 301 undergraduate medical students participated in this cross-sectional quantitative study. Informal social interaction was assessed using self-reported surveys following the network approach. Students' individual motivation, social integration and prior performance were assessed by the Academic Motivation Scale, the College Adaptation Questionnaire and students' GPA respectively. A factual knowledge test represented students' learning. All social networks were significantly positively associated with student learning: friendships (β = 0.11), providing information to other students (β = 0.16), receiving information from other students (β = 0.25). Structural equation modelling revealed a model in which social networks increased student learning (r = 0.43), followed by prior performance (r = 0.31). In contrast to prior literature, students' academic motivation and social integration were not associated with students' learning. Students' informal social interaction is strongly associated with their learning. These findings underline the need to change our focus from the formal context (classroom) to the informal context to optimize student learning and deliver modern medics.
Horodecki, Michał; Oppenheim, Jonathan; Winter, Andreas
2005-08-04
Information--be it classical or quantum--is measured by the amount of communication needed to convey it. In the classical case, if the receiver has some prior information about the messages being conveyed, less communication is needed. Here we explore the concept of prior quantum information: given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the partial information one system needs, conditioned on its prior information. We find that it is given by the conditional entropy--a quantity that was known previously, but lacked an operational meaning. In the classical case, partial information must always be positive, but we find that in the quantum world this physical quantity can be negative. If the partial information is positive, its sender needs to communicate this number of quantum bits to the receiver; if it is negative, then sender and receiver instead gain the corresponding potential for future quantum communication. We introduce a protocol that we term 'quantum state merging' which optimally transfers partial information. We show how it enables a systematic understanding of quantum network theory, and discuss several important applications including distributed compression, noiseless coding with side information, multiple access channels and assisted entanglement distillation.
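The striking claim that quantum partial information can be negative is easy to verify numerically for the textbook example of a Bell state (this toy check uses the standard formula S(A|B) = S(AB) - S(B) with hand-coded eigenvalues; it is not the paper's state-merging protocol):

```python
# Toy check: for the Bell state |Phi+> = (|00> + |11>)/sqrt(2), the
# conditional entropy S(A|B) = S(AB) - S(B) equals -1 bit.
import math

def entropy(eigs):
    """von Neumann entropy in bits from a list of eigenvalues."""
    return -sum(p * math.log2(p) for p in eigs if p > 1e-12)

# The joint state is pure, so rho_AB has eigenvalues (1, 0, 0, 0).
S_AB = entropy([1.0, 0.0, 0.0, 0.0])
# Tracing out A leaves the maximally mixed qubit I/2: eigenvalues (1/2, 1/2).
S_B = entropy([0.5, 0.5])
S_cond = S_AB - S_B
```

S_cond comes out to -1: classically this quantity is always nonnegative, and the paper's operational reading is that the negative value is a gained potential for future quantum communication rather than a cost.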
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
NASA Technical Reports Server (NTRS)
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
Districts Gear up for Shift to Informational Texts
ERIC Educational Resources Information Center
Gewertz, Catherine
2012-01-01
The Common Core State Standards' emphasis on informational text arose in part from research suggesting that employers and college instructors found students weak at comprehending technical manuals, scientific and historical journals, and other texts pivotal to work in those arenas. The common core's vision of informational text includes literary…
Goense, J B M; Ratnam, R
2003-10-01
An important problem in sensory processing is deciding whether fluctuating neural activity encodes a stimulus or is due to variability in baseline activity. Neurons that subserve detection must examine incoming spike trains continuously, and quickly and reliably differentiate signals from baseline activity. Here we demonstrate that a neural integrator can perform continuous signal detection, with performance exceeding that of trial-based procedures, where spike counts in signal and baseline windows are compared. The procedure was applied to data from electrosensory afferents of weakly electric fish (Apteronotus leptorhynchus), where weak perturbations generated by small prey add approximately 1 spike to a baseline of approximately 300 spikes/s. The hypothetical postsynaptic neuron, modeling an electrosensory lateral line lobe cell, could detect an added spike within 10-15 ms, achieving near-ideal detection performance (80-95%) at false alarm rates of 1-2 Hz, while trial-based testing resulted in only 30-35% correct detections at that false alarm rate. The performance improvement was due to anti-correlations in the afferent spike train, which reduced both the amplitude and duration of fluctuations in postsynaptic membrane activity, and so decreased the number of false alarms. Anti-correlations can be exploited to improve detection performance only if there is memory of prior decisions.
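The statistical mechanism behind the reported improvement, anti-correlated interspike intervals damping count fluctuations, can be illustrated with a toy simulation (an editor-added sketch of the statistics only, not the authors' afferent model): pairing each interval with its reflection about the mean induces negative serial correlation, which shrinks the variance of spike counts in fixed windows and hence the rate of spurious threshold crossings at baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_isi = 1.0 / 300.0      # baseline of ~300 spikes/s
window = 0.1                # 100 ms counting windows
n = 200_000                 # interspike intervals per simulated train

def window_count_var(isis, window):
    """Variance of spike counts in consecutive fixed-length windows."""
    t = np.cumsum(isis)
    counts = np.bincount((t // window).astype(int))[:-1]  # drop partial window
    return float(counts.var())

# Independent (renewal) train: exponential ISIs, Poisson-like counts
isi_ind = rng.exponential(mean_isi, n)

# Anti-correlated train: follow each ISI by its reflection about the mean,
# so a short interval is followed by a long one (negative serial correlation)
x = rng.exponential(mean_isi, n // 2)
isi_anti = np.column_stack([x, np.clip(2 * mean_isi - x, 1e-6, None)]).ravel()

v_ind = window_count_var(isi_ind, window)
v_anti = window_count_var(isi_anti, window)
print(v_anti < v_ind)   # True: anti-correlations damp count fluctuations
```

Lower count variance at baseline translates directly into fewer false alarms for a fixed detection threshold.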
Oral carbohydrate supplementation reduces preoperative discomfort in laparoscopic cholecystectomy.
Yildiz, Huseyin; Gunal, Solmaz Eruyar; Yilmaz, Gulsen; Yucel, Safak
2013-04-01
The aim of this study was to investigate the effects of an oral carbohydrate solution (CHO) on perioperative discomfort, biochemistry, hemodynamics, and patient satisfaction in elective surgery patients under general anesthesia. Sixty ASA I-II cases scheduled for operation under general anesthesia were included in the study and randomly divided into two groups of 30 subjects each. The patients in the study group were given CHO in the evening prior to the surgery and 2-3 hr before anesthesia, while routine fasting was applied in the control group. In the study group, malaise, thirst, hunger, and weakness 2-3 hr before the surgery; malaise, thirst, hunger, and fatigue just before the surgery; thirst, hunger, weakness, and concentration difficulty 2 hr after the operation; and malaise and weakness 24 hr after the operation were all significantly lower. Fasting blood glucose (FBG) was found to be higher in the control group at the 90th minute of the operation. Gastric volumes were higher in the control group, while gastric pH values were significantly higher in the study group. The level of anxiety and the risk of depression were lower in the study group. In conclusion, preoperative CHO reduces perioperative discomfort and improves perioperative well-being when compared to overnight fasting.
The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-07-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
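The problematic 1/L behaviour of the i.i.d. rate prior, and its absence under a Dirichlet-style construction, can be demonstrated in a few lines (an editor-added schematic sketch; the gamma shape and scale values are illustrative, not those used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 20_000

def prior_var_of_mean_rate(L, kind):
    """Prior variance of the across-locus average rate under two priors."""
    if kind == "iid":
        # i.i.d. gamma prior per locus: Var(mean rate) shrinks like 1/L
        rates = rng.gamma(shape=2.0, scale=0.5, size=(n_sim, L))
    else:
        # Dirichlet-style prior: draw one overall mean rate mu, then
        # partition it across loci with Dirichlet proportions; the
        # across-locus average equals mu exactly, so its prior variance
        # does not collapse as L grows
        mu = rng.gamma(shape=2.0, scale=0.5, size=(n_sim, 1))
        rates = mu * L * rng.dirichlet(np.ones(L), size=n_sim)
    return float(rates.mean(axis=1).var())

for L in (2, 100):
    print(L, prior_var_of_mean_rate(L, "iid"), prior_var_of_mean_rate(L, "dir"))
```

Under the i.i.d. prior the variance of the mean rate drops by roughly a factor of 50 between L = 2 and L = 100, which is exactly how a misspecified rate prior comes to dominate the posterior; the Dirichlet-style variance stays constant in L.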
NASA Technical Reports Server (NTRS)
Kudryashov, B. A.; Lomovskaya, E. G.; Shapiro, F. B.; Lyapina, L. Y.
1980-01-01
Total non-enzymatic fibrinolytic activity in the blood of rats increased three-fold in response to stress caused by 30-minute immobilization, and the activity of the epinephrine-heparin complex increased nine-fold. In adrenalectomized animals, which showed a weak response to the same stress, intraperitoneal injection of hydrocortisone 30 minutes prior to immobilization normalized the response. These results indicate that adrenalectomy leads to a sharp reduction of heparin complexing with thrombogenic proteins and epinephrine, while substitution therapy with hydrocortisone restores anticoagulation system function.
Generalized multiple kernel learning with data-dependent priors.
Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li
2015-06-01
Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.
ERIC Educational Resources Information Center
Marcoulides, Katerina M.
2018-01-01
This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods were examined. Overall results showed that…
Code of Federal Regulations, 2010 CFR
2010-04-01
... submitting the information required by such form on magnetic tape or by other media, provided that the prior... required by such form on magnetic tape or other approved media, provided that the prior consent of the Commissioner of Social Security (or other authorized officer or employee thereof) has been obtained. [T.D. 6883...
21 CFR 1.281 - What information must be in a prior notice?
Code of Federal Regulations, 2010 CFR
2010-04-01
... by truck, bus, or rail, the trip number; (v) For food arriving as containerized cargo by water, air... arrived by truck, bus, or rail, the trip number; (v) For food that arrived as containerized cargo by water... 21 Food and Drugs 1 2010-04-01 2010-04-01 false What information must be in a prior notice? 1.281...
ERIC Educational Resources Information Center
Brown, Julie
2017-01-01
This article presents an overview of the findings of a recently completed study exploring the potentially transformative impact upon learners of recognition of prior informal learning (RPL). The specific transformative dimension being reported is learner identity. In addition to providing a starting point for an evidence base within Scotland, the…
Constrained Deep Weak Supervision for Histopathology Image Segmentation.
Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan
2017-11-01
In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. The work is set within a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints on positive instances are introduced in our approach to effectively exploit additional weakly supervised information that is easy to obtain, yielding a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
A shape prior-based MRF model for 3D masseter muscle segmentation
NASA Astrophysics Data System (ADS)
Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe
2012-02-01
Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise from the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors, and adjacent anatomical structures with intensity profiles similar to that of the target structure. In this paper we propose a novel approach to segment the masseter muscle in CT datasets using graph cuts with additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both the unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as Maximum-A-Posteriori (MAP) estimation of the MRF. Graph cuts are then used to obtain the global minimum, which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, for example, the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results on the challenging problem of muscle segmentation, and we further provide a quantitative and qualitative comparison with other methods. We statistically show that adding the shape prior to both the unary and pairwise potentials increases the robustness of the proposed method on noisy datasets.
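The kind of energy being minimized can be sketched in miniature. This editor-added toy version uses hypothetical parameters, a perfectly aligned shape template, and simple ICM updates (the paper uses graph cuts, which find the global optimum for this energy class); it shows shape knowledge entering the unary term alongside the data term, with a Potts pairwise term enforcing smoothness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CT-like image: bright square ("muscle") on a dark, noisy background
H = W = 24
truth = np.zeros((H, W), dtype=int)
truth[6:18, 6:18] = 1
img = truth + rng.normal(0.0, 0.45, (H, W))

# Shape prior: a template mask (here aligned with the truth, purely for
# illustration); disagreeing with it is penalized inside the unary term
template = truth.copy()

def unary(label):
    mu = (0.0, 1.0)[label]                       # expected intensity per label
    return (img - mu) ** 2 + 0.5 * (template != label)

U = np.stack([unary(0), unary(1)])               # shape (2, H, W)

# Pairwise Potts term: penalize label disagreement between 4-neighbours.
# ICM (iterated conditional modes) yields a local MAP estimate.
beta = 0.6
labels = (img > 0.5).astype(int)                 # threshold initialization
for _ in range(10):
    for i in range(H):
        for j in range(W):
            cost = U[:, i, j].copy()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    cost += beta * (np.arange(2) != labels[ni, nj])
            labels[i, j] = int(np.argmin(cost))

iou = (labels & truth).sum() / (labels | truth).sum()
print(round(iou, 3))
```

Even on this easy synthetic case, dropping the shape term from `unary` leaves isolated noise-driven errors that the pairwise term alone cannot always remove, which is the motivation for putting shape knowledge into both potentials.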
Two-dimensional distributed-phase-reference protocol for quantum key distribution
NASA Astrophysics Data System (ADS)
Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo
2016-12-01
Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol, in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.
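The dimensionality claim (two bits per detection event) can be made concrete with a toy encoder/decoder. This is a classical, editor-added illustration of the bit bookkeeping only; it ignores the quantum-optical implementation and all security aspects of the protocol.

```python
import numpy as np

def encode(bits):
    """Map pairs of bits onto pulses: one time-bin bit plus one phase bit."""
    assert len(bits) % 2 == 0
    pulses = []
    for t_bit, p_bit in zip(bits[0::2], bits[1::2]):
        time_bin = t_bit                        # early (0) or late (1) slot
        phase = 0.0 if p_bit == 0 else np.pi    # relative phase 0 or pi
        pulses.append((time_bin, phase))
    return pulses

def decode(pulses):
    """Recover two bits from each detection event: its time bin and phase."""
    bits = []
    for time_bin, phase in pulses:
        bits.append(time_bin)
        bits.append(0 if abs(phase) < np.pi / 2 else 1)
    return bits

msg = [1, 0, 0, 1, 1, 1, 0, 0]
print(decode(encode(msg)) == msg)   # True: two bits recovered per pulse
```

Halving the number of pulses needed per raw bit is what drives the higher secret key rate in the detector-limited regimes the abstract mentions.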
Identifying and tracking disaster victims: state-of-the-art technology review.
Pate, Barbara L
2008-01-01
The failure of our nation to adequately track victims of Hurricane Katrina has been identified as a major weakness of national and local disaster preparedness plans. This weakness has prompted government and private industries to acknowledge that existing paper-based tracking systems are incapable of managing information during a large-scale disaster. In response to this need, efforts are under way to develop new technologies that allow instant access to identity and location information during emergency situations. The purpose of this article is to provide a review of state-of-the-art technologies, with implications and limitations for use during mass casualty incidents.
Absorption sites of orally administered drugs in the small intestine.
Murakami, Teruo
2017-12-01
In pharmacotherapy, drugs are mostly taken orally to be absorbed systemically from the small intestine, and some drugs are known to have preferential absorption sites in the small intestine. It would therefore be valuable to know the absorption sites of orally administered drugs and the influencing factors. Areas covered: In this review, the author summarizes the reported absorption sites of orally administered drugs, as well as influencing factors and experimental techniques. Information on the main absorption sites and influencing factors can help to develop ideal drug delivery systems and more effective pharmacotherapies. Expert opinion: Various factors, including the solubility, lipophilicity, luminal concentration, pKa value, transporter substrate specificity, transporter expression, luminal fluid pH, gastrointestinal transit time, and intestinal metabolism, determine site-dependent intestinal absorption. However, most of the dissolved fraction of orally administered drugs, including substrates for ABC and SLC transporters, except for some weakly basic drugs with higher pKa values, is considered to be absorbed sequentially from the proximal small intestine. Securing the solubility and stability of drugs prior to reaching the main absorption sites, and appropriate delivery rates of drugs at those sites, are important goals for achieving effective pharmacotherapy.
Critical Appraisal Toolkit (CAT) for assessing multiple types of evidence
Moralejo, D; Ogunremi, T; Dunn, K
2017-01-01
Healthcare professionals are often expected to critically appraise research evidence in order to make recommendations for practice and policy development. Here we describe the Critical Appraisal Toolkit (CAT) currently used by the Public Health Agency of Canada. The CAT consists of: algorithms to identify the type of study design, three separate tools (for appraisal of analytic studies, descriptive studies and literature reviews), additional tools to support the appraisal process, and guidance for summarizing evidence and drawing conclusions about a body of evidence. Although the toolkit was created to assist in the development of national guidelines related to infection prevention and control, clinicians, policy makers and students can use it to guide appraisal of any health-related quantitative research. Participants in a pilot test completed a total of 101 critical appraisals and found that the CAT was user-friendly and helpful in the process of critical appraisal. Feedback from participants of the pilot test of the CAT informed further revisions prior to its release. The CAT adds to the arsenal of available tools and can be especially useful when the best available evidence comes from non-clinical trials and/or studies with weak designs, where other tools may not be easily applied. PMID:29770086
Upstream oversight assessment for agrifood nanotechnology: a case studies approach.
Kuzma, Jennifer; Romanchek, James; Kokotovich, Adam
2008-08-01
Although nanotechnology is broadly receiving attention in public and academic circles, oversight issues associated with applications for agriculture and food remain largely unexplored. Agrifood nanotechnology is at a critical stage in which informed analysis can help shape funding priorities, risk assessment, and oversight activities. This analysis is designed to help society and policymakers anticipate and prepare for challenges posed by complicated, convergent applications of agrifood nanotechnology. The goal is to identify data, risk assessment, regulatory policy, and engagement needs for overseeing these products so they can be addressed prior to market entry. Our approach, termed upstream oversight assessment (UOA), has potential as a key element of anticipatory governance. It relies on distinct case studies of proposed applications of agrifood nanotechnology to highlight areas that need study and attention. As a tool for preparation, UOA anticipates the types and features of emerging applications; their endpoints of use in society; the extent to which users, workers, ecosystems, or consumers will be exposed; the nature of the material and its safety; whether and where the technologies might fit into current regulatory system(s); the strengths and weaknesses of the system(s) in light of these novel applications; and the possible social concerns related to oversight for them.
Converging Evidence of Ubiquitous Male Bias in Human Sex Perception
Gaetano, Justin; van der Zwan, Rick; Oxner, Matthew; Hayward, William G.; Doring, Natalie; Blair, Duncan; Brooks, Anna
2016-01-01
Visually judging the sex of another can be achieved easily in most social encounters. When the signals that inform such judgements are weak (e.g. outdoors at night), observers tend to expect the presence of males, an expectation that may facilitate survival-critical decisions under uncertainty. The present aim was to examine whether this male bias depends on expertise. To that end, Caucasian and Asian observers targeted female and male hand images that were either the same as or different from the observers' race (i.e. long-term experience was varied) while, concurrently, the proportion of targets changed across presentation blocks (i.e. short-term experience change). It was thus found that: (i) observers of own-race stimuli were more likely to report the presence of males and the absence of females; however, (ii) observers of other-race stimuli, while still tending to accept stimuli as male, were not prone to rejecting female cues. Finally, (iii) male-biased measures did not track the relative frequency of targets or lures, disputing the notion that male bias derives from prior expectation about the number of male exemplars in a set. Findings are discussed in concert with the pan-stimulus model of human sex perception. PMID:26859570
Disability correlates in Canadian Armed Forces Regular Force Veterans.
Thompson, James M; Pranger, Tina; Sweet, Jill; VanTil, Linda; McColl, Mary Ann; Besemann, Markus; Shubaly, Colleen; Pedlar, David
2015-01-01
This study was undertaken to inform disability mitigation for military veterans by identifying personal, environmental, and health factors associated with activity limitations. A sample of 3154 Canadian Armed Forces Regular Force Veterans who were released during 1998-2007 participated in the 2010 Survey on Transition to Civilian Life. Associations between personal and environmental factors, health conditions and activity limitations were explored using ordinal logistic regression. The prevalence of activity reduction in life domains was higher than the Canadian general population (49% versus 21%), as was needing assistance with at least one activity of daily living (17% versus 5%). Prior to adjusting for health conditions, disability odds were elevated for increased age, females, non-degree post-secondary graduation, low income, junior non-commissioned members, deployment, low social support, low mastery, high life stress, and weak sense of community belonging. Reduced odds were found for private/recruit ranks. Disability odds were highest for chronic pain (10.9), any mental health condition (2.7), and musculoskeletal conditions (2.6), and there was a synergistic additive effect of physical and mental health co-occurrence. Disability, measured as activity limitation, was associated with a range of personal and environmental factors and health conditions, indicating multifactorial and multidisciplinary approaches to disability mitigation.
Doorenbos, Ardith Z; Jacobsen, Clemma; Corpuz, Rebecca; Forquera, Ralph; Buchwald, Dedra
2017-01-01
This study seeks to ascertain whether a culturally tailored art calendar could improve participation in cancer screening activities. We conducted a randomized, controlled calendar mail-out in which a Native art calendar was sent by first class mail to 5,633 patients seen at an urban American Indian clinic during the prior 2 years. Using random assignment, half of the patients were mailed a “message” calendar with screening information and reminders on breast, colorectal, lung, and prostate cancer; the other half received a calendar without messages. The receipt of cancer screening services was ascertained through chart abstraction in the following 15 months. In total, 5,363 observations (health messages n=2,695; no messages n=2,668) were analyzed. The calendar with health messages did not result in increased receipt of any cancer-related prevention outcome compared to the calendar without health messages. We solicited clinic input to create a culturally appropriate visual intervention to increase cancer screening in a vulnerable, underserved urban population. Our results suggest that printed materials with health messages are likely too weak an intervention to produce the desired behavioral outcomes in cancer screening. PMID:21472495
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, a 34% gain over random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
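The shrinkage behaviour of a weakly informative prior can be sketched with a MAP fit (an editor-added, generic illustration on synthetic data, not the study's models or DRG features): a normal(0, sigma) prior on each coefficient adds a ridge-like term to the log-likelihood gradient and pulls estimates toward the prior mean of zero, which is what stabilizes parameters when data are sparse.

```python
import numpy as np

rng = np.random.default_rng(7)

# Small synthetic data set standing in for sparse episode-level predictors
n, p = 40, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)

def fit_logistic(X, y, prior_sd=None, steps=5000, lr=0.1):
    """Gradient ascent on the log-likelihood; a normal(0, prior_sd) prior on
    each coefficient turns this into MAP estimation with ridge-like shrinkage."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p_hat)
        if prior_sd is not None:
            grad -= beta / prior_sd**2     # prior pulls estimates toward 0
        beta += lr * grad / len(y)
    return beta

b_mle = fit_logistic(X, y)                   # maximum likelihood
b_map = fit_logistic(X, y, prior_sd=1.0)     # weakly informative prior
print(np.linalg.norm(b_map) < np.linalg.norm(b_mle))   # True: shrinkage
```

With abundant, informative data the likelihood dominates and the two fits converge, mirroring the head abstract's point that a weakly informative prior is only influential when the observed information is weak.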
Identifying risk for language impairment in children from linguistically diverse low-income schools.
Jacobson, Peggy F; Thompson Miller, Suzanne
2017-12-07
To improve screening procedures for children in a linguistically diverse context, we combined tasks known to reveal grammatical deficits in children with language impairment (LI) with training to facilitate performance on a verb elicitation task. Sixty-four first-grade children participated. The objective grammatical measures included elicitation of 12 past tense regular verbs preceded by a teaching phase (teach-test), the sentence recall (SR) subtest of the Clinical Evaluation of Language Fundamentals (CELF-4), and a tally of all conjugated verbs from a narrative retell task. Given the widespread reliance on teacher observation for the referral of children suspected of having LI, we compared our results to the spoken language portion of the CELF-4 teacher observational rating scale (ORS). Using teacher observation as a reference for comparison, the past tense elicitation task and the SR task yielded strong discriminating power, but the verb tally was relatively weak. However, combining the three tasks yielded higher sensitivity (75%) and specificity (92%) than any single measure on its own. This study contributes to alternative assessment practices by highlighting the potential utility of adding a teaching component prior to administering informal grammatical probes.
Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs
NASA Astrophysics Data System (ADS)
Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.
2010-12-01
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mc_pred(d) = a·d^b + c, with d the distance to the 5th-nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing a Mc map for the period 1994-2010.
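Fitting a relationship of the form Mc(d) = a·d^b + c can be done with a simple profile search over the nonlinear exponent b, solving for a and c linearly at each candidate. The sketch below is an editor-added illustration on synthetic data; the parameter values are hypothetical, not the calibration estimated from the CWB catalog.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observations": completeness magnitude vs distance d (km) to the
# 5th-nearest station, generated from Mc(d) = a*d^b + c plus noise
a_true, b_true, c_true = 0.3, 0.5, 1.0
d = rng.uniform(5.0, 150.0, 300)
mc_obs = a_true * d**b_true + c_true + rng.normal(0.0, 0.1, d.size)

# Profile over b: for fixed b the model is linear in (a, c), so use least squares
best = None
for b in np.linspace(0.1, 1.5, 141):
    A = np.column_stack([d**b, np.ones_like(d)])
    coef, *_ = np.linalg.lstsq(A, mc_obs, rcond=None)
    rss = float(((A @ coef - mc_obs) ** 2).sum())
    if best is None or rss < best[0]:
        best = (rss, b, coef[0], coef[1])

_, b_hat, a_hat, c_hat = best
print(b_hat, a_hat, c_hat)   # should land near the generating values
```

In the BMC method such a fitted curve supplies the prior mean for Mc at any point, which is then merged with locally observed values weighted by their uncertainties.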
Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping
2017-03-19
The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches suffer from inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representation (PSR). With the proposed approach, operating modes of the MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable on the battlefield. This helps to reduce the dependence on prior information about the MFR. Moreover, MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computational burden brought by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity.
NASA Astrophysics Data System (ADS)
Lavrentiev, N. A.; Rodimova, O. B.; Fazliev, A. Z.; Vigasin, A. A.
2017-11-01
An approach is suggested for forming applied ontologies in subject domains where results are represented in graphical form. An approach to the systematization of research graphics containing information on weakly bound carbon dioxide complexes is also given. The results of systematizing the research plots and images that characterize the spectral properties of the CO2 complexes are presented.
2003-01-01
xml Internet. Teal Group Corp. Aviation Week and Space Technology, 18 March 2003, 1. 62 Babak Minovi, "Turbine Industry Struggles with Weak Markets" ... 64 Babak Minovi, "Turbine Industry Struggles with Weak Markets" ... what several executives referred to as the "perfect storm" now blowing through the aviation market. With this information many questions remain: Will
Proportion estimation using prior cluster purities
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.
Bayesian road safety analysis: incorporation of past evidence and effect of hyper-prior choice.
Miranda-Moreno, Luis F; Heydari, Shahram; Lord, Dominique; Fu, Liping
2013-09-01
This paper aims to address two related issues when applying hierarchical Bayesian models for road safety analysis, namely: (a) how to incorporate available information from previous studies or past experiences in the (hyper) prior distributions for model parameters and (b) what are the potential benefits of incorporating past evidence on the results of a road safety analysis when working with scarce accident data (i.e., when calibrating models with crash datasets characterized by a very low average number of accidents and a small number of sites). A simulation framework was developed to evaluate the performance of alternative hyper-priors, including informative and non-informative Gamma, Pareto, and Uniform distributions. Based on this simulation framework, different data scenarios (i.e., number of observations and years of data) were defined and tested using crash data collected at 3-legged rural intersections in California and crash data collected for rural 4-lane highway segments in Texas. This study shows how the accuracy of model parameter estimates (inverse dispersion parameter) is considerably improved when incorporating past evidence, in particular when working with a small number of observations and crash data with a low mean. The results also illustrate that when the sample size (more than 100 sites) and the number of years of crash data are relatively large, neither the incorporation of past experience nor the choice of the hyper-prior distribution is likely to affect the final results of a traffic safety analysis. As a potential solution to the problem of low sample mean and small sample size, this paper suggests some practical guidance on how to incorporate past evidence into informative hyper-priors. By combining evidence from past studies with the data available, the model parameter estimates can be significantly improved. The effect of prior choice seems to be less important for hotspot identification.
The results show the benefits of incorporating prior information when working with limited crash data in road safety studies. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
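The shrinkage behaviour described in this abstract can be illustrated with a much simpler conjugate model than the paper's hierarchical one: a Poisson likelihood for site crash counts with a Gamma prior on the crash rate. The counts and prior settings below are hypothetical; the sketch only shows how an informative prior pulls a low-mean, small-sample estimate toward the prior mean, while a vague prior essentially returns the sample mean:

```python
import numpy as np

def gamma_posterior_mean(counts, alpha0, beta0):
    """Conjugate update: Poisson likelihood with a Gamma(alpha0, beta0) prior
    on the rate. The posterior is Gamma(alpha0 + sum(counts), beta0 + n);
    return its mean."""
    counts = np.asarray(counts)
    return (alpha0 + counts.sum()) / (beta0 + counts.size)

# Hypothetical sparse crash data: 5 sites with low counts (values assumed).
crashes = [0, 1, 0, 0, 2]                      # sample mean = 0.6

informative = gamma_posterior_mean(crashes, alpha0=2.0, beta0=2.0)     # prior mean 1.0
vague       = gamma_posterior_mean(crashes, alpha0=0.001, beta0=0.001) # near-flat prior

# With only five low-count observations, the informative prior pulls the
# estimate toward its mean of 1.0; the vague prior stays near the MLE of 0.6.
```

This mirrors the paper's finding in miniature: with scarce, low-mean data the (hyper-)prior matters, whereas with many sites the likelihood would dominate either choice.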
Maximizing the information learned from finite data selects a simple model
NASA Astrophysics Data System (ADS)
Mattingly, Henry H.; Transtrum, Mark K.; Abbott, Michael C.; Machta, Benjamin B.
2018-02-01
We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.
Topical video object discovery from key frames by modeling word co-occurrence prior.
Zhao, Gangqiang; Yuan, Junsong; Hua, Gang; Yang, Jiong
2015-12-01
A topical video object refers to an object that is frequently highlighted in a video. It could be, e.g., the product logo or the leading actor/actress in a TV commercial. We propose a topic model that incorporates a word co-occurrence prior for efficient discovery of topical video objects from a set of key frames. Previous work using topic models, such as latent Dirichlet allocation (LDA), for video object discovery often takes a bag-of-visual-words representation, which ignores important co-occurrence information among the local features. We show that such data-driven co-occurrence information from the bottom up can conveniently be incorporated in LDA with a Gaussian Markov prior, which combines top-down probabilistic topic modeling with bottom-up priors in a unified model. Our experiments on challenging videos demonstrate that the proposed approach can discover different types of topical objects despite variations in scale, viewpoint, color and lighting changes, or even partial occlusions. The efficacy of the co-occurrence prior is clearly demonstrated when compared with topic models without such priors.
Ale, Angelique; Schulz, Ralf B; Sarantopoulos, Athanasios; Ntziachristos, Vasilis
2010-05-01
We study the performance of two newly introduced and previously suggested methods that incorporate priors into inversion schemes, using data from a recently developed hybrid x-ray computed tomography and fluorescence molecular tomography system, the latter based on CCD camera photon detection. The unique data set studied contains accurately registered data of highly spatially sampled photon fields propagating through tissue along 360-degree projections. Approaches that incorporate structural prior information were included in the inverse problem by adding a penalty term to the minimization function utilized for image reconstructions. Results were compared as to their performance with simulated and experimental data from a lung inflammation animal model and against the inversions achieved when not using priors. The importance of using priors over stand-alone inversions is also showcased with highly spatially sampled simulated and experimental data. The approach of optimal performance in resolving fluorescent biodistribution in small animals is also discussed. Inclusion of prior information from x-ray CT data in the reconstruction of the fluorescence biodistribution leads to improved agreement between the reconstruction and validation images for both simulated and experimental data.
Information Security: Serious Weaknesses Put State Department and FAA Operations at Risk
DOT National Transportation Integrated Search
1998-05-19
Testimony focuses on the results of recent reviews of computer security at the Department of State and the Federal Aviation Administration (FAA). Makes specific recommendations for improving State and FAA's information security posture. Highlights be...
Education review: a graduate course in management information systems in health care.
Glaser, J P
1994-08-01
The article presents and discusses a graduate course in managing information systems in health care delivery organizations. The article presents the course content, assignments, and syllabus and reviews the strengths and weaknesses of the course.
Synchronous contextual irregularities affect early scene processing: replication and extension.
Mudrik, Liad; Shalgi, Shani; Lamy, Dominique; Deouell, Leon Y
2014-04-01
Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window even when simultaneous presentation of the scene and object prevents the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on. Copyright © 2014 Elsevier Ltd. All rights reserved.
Different Neuroplasticity for Task Targets and Distractors
Spingath, Elsie Y.; Kang, Hyun Sug; Plummer, Thane; Blake, David T.
2011-01-01
Adult learning-induced sensory cortex plasticity results in enhanced action potential rates in neurons that have the most relevant information for the task, or those that respond strongly to one sensory stimulus but weakly to its comparison stimulus. Current theories suggest this plasticity is caused when target stimulus evoked activity is enhanced by reward signals from neuromodulatory nuclei. Prior work has found evidence suggestive of nonselective enhancement of neural responses, and suppression of responses to task distractors, but the differences in these effects between detection and discrimination have not been directly tested. Using cortical implants, we defined physiological responses in macaque somatosensory cortex during serial, matched, detection and discrimination tasks. Nonselective increases in neural responsiveness were observed during detection learning. Suppression of responses to task distractors was observed during discrimination learning, and this suppression was specific to cortical locations that sampled responses to the task distractor before learning. Changes in receptive field size were measured as the area of skin that had a significant response to a constant magnitude stimulus, and these areal changes paralleled changes in responsiveness. From before detection learning until after discrimination learning, the enduring changes were selective suppression of cortical locations responsive to task distractors, and nonselective enhancement of responsiveness at cortical locations selective for target and control skin sites. A comparison of observations in prior studies with the observed plasticity effects suggests that the non-selective response enhancement and selective suppression suffice to explain known plasticity phenomena in simple spatial tasks. 
This work suggests that differential responsiveness to task targets and distractors in primary sensory cortex for a simple spatial detection and discrimination task arises from nonselective increases in response over a broad cortical locus that includes the representation of the task target, and selective suppression of responses to the task distractor within this locus. PMID:21297962
Zengel, Bettina; Ambler, James K; McCarthy, Randy J; Skowronski, John J
2017-01-01
This article reports results from a study in which participants encountered either (a) previously known informants who were positive (e.g., Abraham Lincoln), neutral (e.g., Jay Leno), or negative (e.g., Adolf Hitler), or (b) previously unknown informants. The informants ostensibly described either a trait-implicative positive behavior, a trait-implicative negative behavior, or a neutral behavior. These descriptions were framed as either the behavior of the informant or the behavior of another person. Results yielded evidence of informant-trait linkages both for self-informants and for informants who described another person. These effects were not moderated by informant type, behavior valence, or the congruency between prior knowledge of the informant and the behavior valence. Results are discussed in terms of theories of Spontaneous Trait Inference and Spontaneous Trait Transference.
An investigation of multitasking information behavior and the influence of working memory and flow
NASA Astrophysics Data System (ADS)
Alexopoulou, Peggy; Hepworth, Mark; Morris, Anne
2015-02-01
This study explored the multitasking information behaviour of Web users and how this is influenced by working memory, flow and Personal, Artefact and Task characteristics, as described in the PAT model. The research was exploratory, using a pragmatic, mixed-method approach. Thirty university students participated: 10 psychologists, 10 accountants and 10 mechanical engineers. The data collection tools used were: pre and post questionnaires, a working memory test, a flow state scale test, audio-visual data, web search logs, think-aloud data, observation, and the critical decision method. All participants searched for information on the Web on four topics: two for which they had prior knowledge and two for which they had none. Perception of task complexity was found to be related to working memory. People with low working memory reported a significant increase in task complexity after they had completed information-searching tasks for which they had no prior knowledge; this was not the case for tasks with prior knowledge. Regarding flow and task complexity, the results confirmed the suggestion of the PAT model (Finneran and Zhang, 2003) that a complex task can lead to anxiety and low flow levels as well as to perceived challenge and high flow levels. However, the results did not confirm the suggestion of the PAT model regarding the characteristics of web search systems, especially perceived vividness. All participants experienced high vividness, whereas according to the PAT model only people with high flow should experience high levels of vividness. Flow affected the degree of change in the participants' knowledge: people with high flow gained more knowledge on tasks without prior knowledge than people with low flow. Furthermore, accountants felt that tasks without prior knowledge were less complex at the end of the web-seeking procedure than did psychologists and mechanical engineers.
Finally, the three disciplines appeared to differ regarding the multitasking information behaviour characteristics such as queries, web search sessions and opened tabs/windows.
Quantum Information Theory of Measurement
NASA Astrophysics Data System (ADS)
Glick, Jennifer Ranae
Quantum measurement lies at the heart of quantum information processing and is one of the criteria for quantum computation. Despite its central role, there remains a need for a robust quantum information-theoretical description of measurement. In this work, I will quantify how information is processed in a quantum measurement by framing it in quantum information-theoretic terms. I will consider a diverse set of measurement scenarios, including weak and strong measurements, and parallel and consecutive measurements. In each case, I will perform a comprehensive analysis of the role of entanglement and entropy in the measurement process and track the flow of information through all subsystems. In particular, I will discuss how weak and strong measurements are fundamentally of the same nature and show that weak values can be computed exactly for certain measurements with an arbitrary interaction strength. In the context of the Bell-state quantum eraser, I will derive a trade-off between the coherence and "which-path" information of an entangled pair of photons and show that a quantum information-theoretic approach yields additional insights into the origins of complementarity. I will consider two types of quantum measurements: those that are made within a closed system where every part of the measurement device, the ancilla, remains under control (what I will call unamplified measurements), and those performed within an open system where some degrees of freedom are traced over (amplified measurements). For sequences of measurements of the same quantum system, I will show that information about the quantum state is encoded in the measurement chain and that some of this information is "lost" when the measurements are amplified: the ancillae become equivalent to a quantum Markov chain.
Finally, using the coherent structure of unamplified measurements, I will outline a protocol for generating remote entanglement, an essential resource for quantum teleportation and quantum cryptographic tasks.
Bishop, Michael Jason; Crow, Brian S; Kovalcik, Kasey D; George, Joe; Bralley, James A
2007-04-01
A rapid and accurate quantitative method was developed and validated for the analysis of four urinary organic acids with nitrogen-containing functional groups, formiminoglutamic acid (FIGLU), pyroglutamic acid (PYRGLU), 5-hydroxyindoleacetic acid (5-HIAA), and 2-methylhippuric acid (2-METHIP), by liquid chromatography tandem mass spectrometry (LC/MS/MS). The chromatography was developed using a weak anion-exchange amino column that provided mixed-mode retention of the analytes. The elution gradient relied on changes in mobile phase pH over a concave gradient, without the use of counter-ions or concentrated salt buffers. A simple sample preparation was used, only requiring the dilution of urine prior to instrumental analysis. The method was validated based on linearity (r² ≥ 0.995), accuracy (85-115%), precision (C.V. < 12%), sample preparation stability (
NASA Astrophysics Data System (ADS)
Ashraf Mohamad Ismail, Mohd; Ng, Soon Min; Hazreek Zainal Abidin, Mohd; Madun, Aziman
2018-04-01
The application of geophysical seismic refraction to slope stabilization design using the soil nailing method was demonstrated in this study. The potential weak layer of the study area is first identified prior to determining the appropriate length and location of the soil nails. A total of 7 seismic refraction survey lines were conducted at the study area with standard procedures. The refraction data were then analyzed using the Pickwin and Plotrefa computer software package to obtain the seismic velocity profile distributions. These results were correlated with the complementary borehole data to interpret the subsurface profile of the study area. Layers 1 to 3 were identified as the potential weak zone susceptible to slope failure. Hence, soil nails should be installed to transfer the tensile load from the less stable layer 3 to the more stable layer 4. The soil-nail interaction provides a reinforcing action to the soil mass, thereby increasing the stability of the slope.
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
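The Noisy-OR combination described in this abstract can be sketched as follows. This is a generic noisy-OR over per-source confidence scores, not necessarily the authors' exact formulation; the source names and numerical values are assumptions for illustration:

```python
import numpy as np

def noisy_or(source_probs, reliabilities=None):
    """Noisy-OR combination of edge-confidence scores from K information sources.
    Each source k independently 'activates' the interaction with probability
    reliabilities[k] * source_probs[k]; the combined prior is the probability
    that at least one source activates it."""
    p = np.asarray(source_probs, dtype=float)
    r = np.ones_like(p) if reliabilities is None else np.asarray(reliabilities, dtype=float)
    return 1.0 - np.prod(1.0 - r * p)

# Hypothetical confidences for one candidate edge from three sources
# (e.g., a pathway database, GO-term similarity, protein-domain data).
prior = noisy_or([0.8, 0.3, 0.1])
```

For these assumed values the combined prior is 1 − (0.2 × 0.7 × 0.9) = 0.874: a single strongly supporting source dominates, which matches the abstract's description of Noisy-OR as picking up the strongest support among sources.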
Guided transect sampling - a new design combining prior information and field surveying
Anna Ringvall; Goran Stahl; Tomas Lamas
2000-01-01
Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...
Cooper, Valentino R.; Lee, Jun Hee; Krogel, Jaron T.; ...
2015-08-06
Multiferroic BiFeO3 exhibits excellent magnetoelectric coupling critical for magnetic information processing with minimal power consumption. However, the degenerate nature of the easy spin axis in the (111) plane presents roadblocks for real-world applications. Here, we explore the stabilization and switchability of the weak ferromagnetic moments under applied epitaxial strain using a combination of first-principles calculations and group-theoretic analyses. We demonstrate that the antiferromagnetic moment vector can be stabilized along unique crystallographic directions ([110] and [-110]) under compressive and tensile strains. A direct coupling between the anisotropic antiferrodistortive rotations and Dzyaloshinskii-Moriya interactions drives the stabilization of weak ferromagnetism. Furthermore, energetically competing C- and G-type magnetic orderings are observed at high compressive strains, suggesting that it may be possible to switch the weak ferromagnetism on and off under application of strain. These findings emphasize the importance of strain and antiferrodistortive rotations as routes to enhancing induced weak ferromagnetism in multiferroic oxides.
Weak values, 'negative probability', and the uncertainty principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolovski, D.
2007-10-15
A quantum transition can be seen as a result of interference between various pathways (e.g., Feynman paths), which can be labeled by a variable f. An attempt to determine the value of f without destroying the coherence between the pathways produces a weak value of f. We show f to be an average obtained with an amplitude distribution which can, in general, take negative values, which, in accordance with the uncertainty principle, need not contain information about the actual range of f which contributes to the transition. It is also demonstrated that the moments of such alternating distributions have a number of unusual properties which may lead to a misinterpretation of the weak-measurement results. We provide a detailed analysis of weak measurements with and without post-selection. Examples include the double-slit diffraction experiment, weak von Neumann and von Neumann-like measurements, traversal time for an elastic collision, phase time, and local angular momentum.
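The weak value discussed in this abstract is conventionally written A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ for a preselected state |ψ⟩ and postselected state |φ⟩. The sketch below evaluates it for a Pauli-z observable with nearly orthogonal pre- and post-selection; the particular states are chosen only to illustrate the anomalous amplification beyond the eigenvalue range [-1, +1]:

```python
import numpy as np

def weak_value(phi, A, psi):
    """Weak value A_w = <phi|A|psi> / <phi|psi> for preselected psi,
    postselected phi (both as complex state vectors)."""
    return np.vdot(phi, A @ psi) / np.vdot(phi, psi)

# Pauli-z observable, eigenvalues +1 and -1.
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Preselection along +x; postselection nearly orthogonal to it.
chi = np.pi / 4 - 0.1
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
phi = np.array([np.cos(chi), -np.sin(chi)], dtype=complex)

wv = weak_value(phi, sz, psi)
# Analytically wv = (cos(chi) + sin(chi)) / (cos(chi) - sin(chi)) = cot(0.1) ≈ 9.97,
# far outside the [-1, +1] eigenvalue range of sigma_z.
```

The near-orthogonal postselection makes the denominator small, which is exactly the regime where the alternating (partly negative) amplitude distribution described in the abstract produces weak values outside the observable's spectrum.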
Weak-interaction rates in stellar conditions
NASA Astrophysics Data System (ADS)
Sarriguren, Pedro
2018-05-01
Weak-interaction rates, including β-decay and electron captures, are studied in several mass regions at various densities and temperatures of astrophysical interest. In particular, we study odd-A nuclei in the pf-shell region, which are involved in presupernova formations. Weak rates are relevant to understand the late stages of the stellar evolution, as well as the nucleosynthesis of heavy nuclei. The nuclear structure involved in the weak processes is studied within a quasiparticle proton-neutron random-phase approximation with residual interactions in both particle-hole and particle-particle channels on top of a deformed Skyrme Hartree-Fock mean field with pairing correlations. First, the energy distributions of the Gamow-Teller strength are discussed and compared with the available experimental information, measured under terrestrial conditions from charge-exchange reactions. Then, the sensitivity of the weak-interaction rates to both astrophysical densities and temperatures is studied. Special attention is paid to the relative contribution to these rates of thermally populated excited states in the decaying nucleus and to the electron captures from the degenerate electron plasma.
NASA Astrophysics Data System (ADS)
Qu, W.; Bogena, H. R.; Huisman, J. A.; Martinez, G.; Pachepsky, Y. A.; Vereecken, H.
2013-12-01
Soil water content (SWC) is a key variable in the soil, vegetation and atmosphere continuum, with high spatial and temporal variability. Temporal stability of SWC has been observed in multiple monitoring studies, and the quantification of controls on soil moisture variability and temporal stability is of substantial interest. The objective of this work was to assess the effect of soil hydraulic parameters on temporal stability. Inverse modeling based on a large observed time series of SWC from an in-situ sensor network was used to estimate the van Genuchten-Mualem (VGM) soil hydraulic parameters in a small grassland catchment located in western Germany. For the inverse modeling, the shuffled complex evolution (SCE) optimization algorithm was coupled with the HYDRUS-1D code. We considered two cases: without and with prior information about the correlation between VGM parameters. The temporal stability of observed SWC was well pronounced at all observation depths. Both the spatial variability of SWC and the robustness of temporal stability increased with depth. Models calibrated both with and without prior information provided reasonable correspondence between simulated and measured time series of SWC. Furthermore, we found a linear relationship between the mean relative difference (MRD) of SWC and the saturated SWC (θs). Also, the logarithm of saturated hydraulic conductivity (Ks), the VGM parameter n and the logarithm of α were strongly correlated with the MRD of saturation degree for the prior-information case, but no correlation was found for the non-prior-information case except at the 50 cm depth. Based on these results, we propose that establishing relationships between temporal stability and the spatial variability of soil properties is a promising research avenue for a better understanding of the controls on soil moisture variability.
[Figure] Correlation between mean relative difference of soil water content (or saturation degree) and inversely estimated soil hydraulic parameters (log10(Ks), log10(α), n, and θs) at 5-cm, 20-cm and 50-cm depths. Solid circles: parameters estimated using prior information; open circles: parameters estimated without prior information.
Analysis of factors related to arm weakness in patients with breast cancer-related lymphedema.
Lee, Daegu; Hwang, Ji Hye; Chu, Inho; Chang, Hyun Ju; Shim, Young Hun; Kim, Jung Hyun
2015-08-01
The aim of this study was to evaluate the proportion of breast cancer-related lymphedema patients with significant weakness in the affected arm relative to the unaffected side. Another purpose was to identify factors related to arm weakness and physical function in patients with breast cancer-related lymphedema. Consecutive patients (n = 80) attended a single evaluation session following their outpatient lymphedema clinic visit. Possible independent factors (i.e., lymphedema, pain, psychological, educational, and behavioral) were evaluated. Handgrip strength was used to assess upper extremity muscle strength, and the disabilities of arm, shoulder, and hand (DASH) questionnaire was used to assess upper extremity physical function. Multivariate logistic regression was performed using factors that differed significantly between the handgrip weakness and non-weakness groups. Of the 80 patients with breast cancer-related lymphedema, 29 (36.3%) had significant weakness in the affected arm. Weakness of the arm with lymphedema was not related to lymphedema itself, but was related to fear of using the affected limb (odds ratio = 1.76, 95% confidence interval = 1.30-2.37). Fear of using the affected limb and depression significantly contributed to the variance in DASH scores. Appropriate physical and psychological interventions, including providing accurate information and reassurance about the safety of physical activity, are necessary to prevent arm weakness and physical dysfunction in patients with breast cancer-related lymphedema.
ERIC Educational Resources Information Center
Lorenzen, Elizabeth A.; And Others
Directed especially at graduating college seniors, this paper contains information about employment interviews and how to prepare for them. Subjects discussed include the following: preparing for interviews (analyzing strengths and weaknesses, gathering information about the company); points to remember (dress codes, follow up thank-you letters);…
Mixed methods systematic review exploring mentorship outcomes in nursing academia.
Nowell, Lorelli; Norris, Jill M; Mrklas, Kelly; White, Deborah E
2017-03-01
The aim of this study was to report on a mixed methods systematic review that critically examines the evidence for mentorship in nursing academia. Nursing education institutions globally have issued calls for mentorship. There is emerging evidence to support the value of mentorship in other disciplines, but the extant state of the evidence in nursing academia is not known. A comprehensive review of the evidence is required. A mixed methods systematic review. Five databases (MEDLINE, CINAHL, EMBASE, ERIC, PsycINFO) were searched using an a priori search strategy from inception to 2 November 2015 to identify quantitative, qualitative and mixed methods studies. Grey literature searches were also conducted in electronic databases (ProQuest Dissertations and Theses, Index to Theses) and mentorship conference proceedings and by hand searching the reference lists of eligible studies. Study quality was assessed prior to inclusion using standardized critical appraisal instruments from the Joanna Briggs Institute. A convergent qualitative synthesis design was used where results from qualitative, quantitative and mixed methods studies were transformed into qualitative findings. Mentorship outcomes were mapped to a theory-informed framework. Thirty-four studies were included in this review, from the 3001 records initially retrieved. In general, mentorship had a positive impact on behavioural, career, attitudinal, relational and motivational outcomes; however, the methodological quality of studies was weak. This review can inform the objectives of mentorship interventions and contribute to a more rigorous approach to studies that assess mentorship outcomes. © 2016 John Wiley & Sons Ltd.
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
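The hierarchical-prior idea behind HADO, building an informative prior from observers who already completed the experiment, can be illustrated with a minimal conjugate sketch. All numbers, the single scalar parameter, and the known noise variance are assumptions for illustration, not values from the study:

```python
import numpy as np

# Hypothetical parameter estimates from past observers; these define
# the informative (group-level) prior, in place of a diffuse prior.
past = np.array([1.8, 2.1, 2.0, 2.3, 1.9])
mu0, tau2 = past.mean(), past.var(ddof=1)

# A new observer contributes only a few noisy estimates
sigma2 = 0.5                      # assumed known measurement variance
y = np.array([2.6, 2.4, 2.8])

# Conjugate normal-normal update: precisions add, means are
# precision-weighted, so few observations already give a tight posterior.
post_prec = 1 / tau2 + len(y) / sigma2
post_mean = (mu0 / tau2 + y.sum() / sigma2) / post_prec
post_var = 1 / post_prec
```

The informative prior shrinks the new observer's sample mean toward the group mean and yields a smaller posterior variance than the data alone would.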
PMID:27105061
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-26
... constitutes prior art increase the need to have accurate and up-to-date ownership information about patent..., section 102(b)(2)(C) exempts as prior art those patent applications or issued patents that name different... issued patent may prevent its use as prior art against a later-filed patent application, patentability...
ERIC Educational Resources Information Center
Happ, Roland; Förster, Manuel; Zlatkin-Troitschanskaia, Olga; Carstensen, Vivian
2016-01-01
Study-related prior knowledge plays a decisive role in business and economics degree courses. Prior knowledge has a significant influence on knowledge acquisition in higher education, and teachers need information on it to plan their introductory courses accordingly. Very few studies have been conducted of first-year students' prior economic…
ERIC Educational Resources Information Center
Popova-Gonci, Viktoria; Lamb, Monica C.
2012-01-01
Prior learning assessment (PLA) students enter academia with different types of concepts--some of them have been formally accepted and labeled by academia and others are informally formulated by students via independent and/or experiential learning. The critical goal of PLA practices is to assess an intricate combination of prior learning…
Determining the structure of Higgs couplings at the CERN Large Hadron Collider.
Plehn, Tilman; Rainwater, David; Zeppenfeld, Dieter
2002-02-04
Higgs boson production via weak boson fusion at the CERN Large Hadron Collider has the capability to determine the dominant CP nature of a Higgs boson, via the tensor structure of its coupling to weak bosons. This information is contained in the azimuthal angle distribution of the two outgoing forward tagging jets. The technique is independent of both the Higgs boson mass and the observed decay channel.
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
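The algorithm's requirement that prior information first be tested against sample-only estimates before being fused can be sketched as follows. This is a hedged illustration assuming scalar estimates with known variances and a simple z-test for compatibility, not a reconstruction of the six estimators examined in the paper:

```python
import math

def combine_if_compatible(x_prior, var_prior, x_obs, var_obs, z_crit=1.96):
    """Fuse a prior deformation-parameter estimate with the sample-only
    estimate, but only when a z-test does not reject their compatibility."""
    z = abs(x_prior - x_obs) / math.sqrt(var_prior + var_obs)
    if z > z_crit:                       # incompatible prior: keep data only
        return x_obs, var_obs
    w_p, w_o = 1 / var_prior, 1 / var_obs   # precision weights
    var = 1 / (w_p + w_o)
    return var * (w_p * x_prior + w_o * x_obs), var

est, var = combine_if_compatible(5.0, 4.0, 6.0, 1.0)   # compatible: fused
print(est, var)  # 5.8 0.8
```

When the prior passes the test, the fused variance is smaller than either input variance; when it fails, the sample estimate is kept unchanged, guarding against the "wrong prior" case the paper analyses.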
Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong
2017-08-15
Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. 
Recovery from DSM-IV post-traumatic stress disorder in the WHO World Mental Health surveys.
Rosellini, A J; Liu, H; Petukhova, M V; Sampson, N A; Aguilar-Gaxiola, S; Alonso, J; Borges, G; Bruffaerts, R; Bromet, E J; de Girolamo, G; de Jonge, P; Fayyad, J; Florescu, S; Gureje, O; Haro, J M; Hinkov, H; Karam, E G; Kawakami, N; Koenen, K C; Lee, S; Lépine, J P; Levinson, D; Navarro-Mateu, F; Oladeji, B D; O'Neill, S; Pennell, B-E; Piazza, M; Posada-Villa, J; Scott, K M; Stein, D J; Torres, Y; Viana, M C; Zaslavsky, A M; Kessler, R C
2018-02-01
Research on post-traumatic stress disorder (PTSD) course finds a substantial proportion of cases remit within 6 months, a majority within 2 years, and a substantial minority persists for many years. Results are inconsistent about pre-trauma predictors. The WHO World Mental Health surveys assessed lifetime DSM-IV PTSD presence-course after one randomly-selected trauma, allowing retrospective estimates of PTSD duration. Prior traumas, childhood adversities (CAs), and other lifetime DSM-IV mental disorders were examined as predictors using discrete-time person-month survival analysis among the 1575 respondents with lifetime PTSD. 20%, 27%, and 50% of cases recovered within 3, 6, and 24 months and 77% within 10 years (the longest duration allowing stable estimates). Time-related recall bias was found largely for recoveries after 24 months. Recovery was weakly related to most trauma types other than very low [odds-ratio (OR) 0.2-0.3] early-recovery (within 24 months) associated with purposefully injuring/torturing/killing and witnessing atrocities and very low later-recovery (25+ months) associated with being kidnapped. The significant ORs for prior traumas, CAs, and mental disorders were generally inconsistent between early- and later-recovery models. Cross-validated versions of final models nonetheless discriminated significantly between the 50% of respondents with highest and lowest predicted probabilities of both early-recovery (66-55% v. 43%) and later-recovery (75-68% v. 39%). We found PTSD recovery trajectories similar to those in previous studies. The weak associations of pre-trauma factors with recovery, also consistent with previous studies, presumably are due to stronger influences of post-trauma factors.
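The discrete-time person-month survival analysis used here rests on expanding each respondent into one row per month at risk; a logistic model for the event indicator on those rows then estimates the monthly recovery hazard. A toy sketch of the expansion (illustrative data, not from the surveys):

```python
def person_months(duration, recovered):
    """Expand one person into (month, event) rows for discrete-time
    survival analysis; event = 1 only in the final month if recovery
    occurred, 0 everywhere else (censored cases never get a 1)."""
    return [(m, 1 if (recovered and m == duration) else 0)
            for m in range(1, duration + 1)]

# One person recovering in month 3, one censored at month 2
print(person_months(3, True))   # [(1, 0), (2, 0), (3, 1)]
print(person_months(2, False))  # [(1, 0), (2, 0)]
```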
Line Assignments and Position Measurements in Several Weak CO2 Bands Between 4590/cm and 7930/cm
NASA Technical Reports Server (NTRS)
Giver, L. P.; Kshirsagar, R. J.; Freedman, R. C.; Chackerian, C., Jr.; Wattson, R. B.; Gore, Warren J. (Technical Monitor)
1998-01-01
A substantial set of CO2 spectra from 4500 to 12000/cm has been obtained at Ames with 1500 m path length using a Bomem DA8 FTS. The signal/noise was improved compared to prior spectra obtained in this laboratory by including a filter wheel limiting the band-pass of each spectrum to several hundred per cm. We have measured positions of lines in several weak bands not previously resolved in laboratory spectra. Using our positions and assignments of lines of the Q-branch of the 31103-00001 vibrational band at 4591/cm, we have redetermined the rotational constants for the 31103f levels. Q-branch lines of this band were previously observed, but misassigned, in Venus spectra by Mandin. The current HITRAN values of the rotational constants for this level are incorrect due to the Q-branch misassignments. Our prior measurements of the 21122-00001 vibrational band at 7901/cm were limited to Q- and R-branch lines; with the improved signal/noise of these new spectra we have now measured lines in the weaker P branch. The 21122 (G(sub v) = 7901.48/cm) levels are known to be perturbed by the 32211 (G(sub v) = 7897.57/cm) levels; new DND calculations predict that high-J lines of the forbidden 32211-00001 vibrational band 'borrow' intensity from the corresponding transitions of the 21122-00001 band. We have identified such Q- and R-branch transitions of the 32211-00001 band from 26 < J" < 44, based on our position measurements of lines in the 32211-02201 band at 6562/cm.
Evaluation of soluble CD30 as an immunologic marker in heart transplant recipients.
Spiridon, C; Hunt, J; Mack, M; Rosenthal, J; Anderson, A; Eichhorn, E; Magee, M; Dewey, T; Currier, M; Nikaein, A
2006-12-01
CD30 is an immunologic molecule that belongs to the TNF-R superfamily. CD30 serves as a T-cell signal transducing molecule that is expressed by a subset of activated T lymphocytes, CD45RO+ memory T cells. Augmentation of soluble CD30 during kidney transplant rejection has been reported. Our study sought to determine whether the level of sCD30 prior to heart transplant could categorize patients into high versus low immunologic risk for a poor outcome. A significant correlation was observed between high levels of soluble CD30 and a reduced incidence of infection. None of the 35 patients with high pretransplant levels of sCD30 (>90 U/mL) developed infections posttransplantation. However, 9 of 65 patients who had low levels of sCD30 (<90 U/mL) developed infections posttransplantation (P < .02). No remarkable differences were noted among the other clinical parameters. The results also showed that the high-definition flow-bead (HDB) assay detected both weak and strong class I and class II HLA antibodies, some of which (weak class II HLA Abs) were undetectable by the anti-human globulin cytotoxicity method. In addition, more antibody specificities were detected by HDB. In conclusion, we have observed that high levels of sCD30 prior to heart transplant may be associated with greater immunologic ability and therefore produce a protective effect on the development of infection post heart transplant. We have also shown that the HDB assay is superior to the visual cytotoxicity method to detect HLA antibodies, especially those to class II HLA antigens.
Mobile devices and weak ties: a study of vision impairments and workplace access in Bangalore.
Pal, Joyojeet; Lakshmanan, Meera
2015-07-01
To explore ways in which social and economic interactions are changed by access to mobile telephony. This is a mixed-methods study of mobile phone use among 52 urban professionals with vision impairments in Bangalore, India. Interviews and survey results indicated that mobile devices, specifically those with adaptive technology software, play a vital role as multi-purpose devices that enable people with disabilities to navigate economically and socially in an environment where accessibility remains a significant challenge. We found that mobile devices play a central role in enabling and sustaining weak ties, but also that these weak ties have important gender-specific implications. We found that women have less access to weak ties than men, which impacts women's access to assistive technology (AT). This has potential implications for women's sense of safety and independence, both of which are strongly related to AT access. Implications for Rehabilitation Adaptive technologies increase individuals' ability to keep in contact with casual connections or weak ties through phone calls or social media. Men tend to have stronger access to weak ties than women in India due to cultural impediments to independent access to public spaces. Weak ties are an important source of assistive technology (AT) due to the high rate of resale of used AT, typically through informal networks.
ERIC Educational Resources Information Center
Moller, Peter
1980-01-01
Describes electroreceptivity in fishes, including information on electric signals in water, electroreceptors, electric organs, electric sense in weak-electric fishes, electrolocation, electrocommunication, and evolutionary considerations. (CS)
How much to trust the senses: Likelihood learning
Sato, Yoshiyuki; Kording, Konrad P.
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs change over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
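For normal distributions, the prior-likelihood integration these models assume reduces to a precision-weighted average: the narrower the likelihood, the more the estimate follows the sensory cue. A minimal sketch with hypothetical numbers:

```python
def bayes_estimate(prior_mu, prior_var, like_mu, like_var):
    """Optimal normal-normal combination of a prior and a likelihood."""
    w = prior_var / (prior_var + like_var)   # weight on the likelihood
    mu = prior_mu + w * (like_mu - prior_mu)
    var = prior_var * like_var / (prior_var + like_var)
    return mu, var

# A reliable cue (small likelihood variance) dominates the prior...
mu_sharp, _ = bayes_estimate(0.0, 4.0, 10.0, 1.0)   # -> 8.0
# ...while an unreliable cue leaves the estimate near the prior mean.
mu_broad, _ = bayes_estimate(0.0, 4.0, 10.0, 16.0)  # -> 2.0
```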
Contribution of prior semantic knowledge to new episodic learning in amnesia.
Kan, Irene P; Alexander, Michael P; Verfaellie, Mieke
2009-05-01
We evaluated whether prior semantic knowledge would enhance episodic learning in amnesia. Subjects studied prices that are either congruent or incongruent with prior price knowledge for grocery and household items and then performed a forced-choice recognition test for the studied prices. Consistent with a previous report, healthy controls' performance was enhanced by price knowledge congruency; however, only a subset of amnesic patients experienced the same benefit. Whereas patients with relatively intact semantic systems, as measured by an anatomical measure (i.e., lesion involvement of anterior and lateral temporal lobes), experienced a significant congruency benefit, patients with compromised semantic systems did not experience a congruency benefit. Our findings suggest that when prior knowledge structures are intact, they can support acquisition of new episodic information by providing frameworks into which such information can be incorporated.
Assignment of a non-informative prior when using a calibration function
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2012-01-01
The evaluation of measurement uncertainty associated with the use of calibration functions was addressed in a talk at the 19th IMEKO World Congress 2009 in Lisbon (Proceedings, pp 2346-51). Therein, an example involving a cubic function was analysed by a Bayesian approach and by the Monte Carlo method described in Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement'. Results were found to be discrepant. In this paper we examine a simplified version of the example and show that the reported discrepancy is caused by the choice of the prior in the Bayesian analysis, which does not conform to formal rules for encoding the absence of prior knowledge. Two options for assigning a non-informative prior free from this shortcoming are considered; they are shown to be equivalent.
Lee, Tian-Fu; Liu, Chuan-Ming
2013-06-01
A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme in order to solve the weakness of the authentication scheme of Wei et al., where the off-line password guessing attacks cannot be resisted. This investigation indicates that the improved scheme of Zhu has some faults such that the authentication scheme cannot execute correctly and is vulnerable to the attack of parallel sessions. Additionally, an enhanced authentication scheme based on the scheme of Zhu is proposed. The enhanced scheme not only avoids the weakness in the original scheme, but also provides users' anonymity and authenticated key agreements for secure data communications.
75 FR 42725 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... Management Services, Office of Management, invites comments on the proposed information collection requests.... The Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, publishes that notice containing proposed information collection requests prior to...
75 FR 34989 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-21
... Management Services, Office of Management, invites comments on the proposed information collection requests.... The Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, publishes that notice containing proposed information collection requests prior to...
75 FR 37415 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-29
... Management Services, Office of Management, invites comments on the proposed information collection requests.... The Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, publishes that notice containing proposed information collection requests prior to...
75 FR 39214 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-08
... Management Services, Office of Management, invites comments on the proposed information collection requests.... The Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, publishes that notice containing proposed information collection requests prior to...
75 FR 51449 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-20
... Management Services, Office of Management, invites comments on the proposed information collection requests.... The Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, publishes that notice containing proposed information collection requests prior to...
75 FR 34990 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-21
... Management Services, Office of Management, invites comments on the proposed information collection requests.... The Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, publishes that notice containing proposed information collection requests prior to...
Amiche, M A; Albaum, J M; Tadrous, M; Pechlivanoglou, P; Lévesque, L E; Adachi, J D; Cadarette, S M
2016-06-01
Efficacy of osteoporosis medication is not well-established among patients taking oral glucocorticoids. We assessed the efficacy of approved osteoporosis pharmacotherapies in preventing fracture by combining data from randomized controlled trials. Teriparatide, risedronate, and etidronate were associated with decreased vertebral fracture risk. Several osteoporosis drugs are approved for the prevention and treatment of glucocorticoid (GC)-induced osteoporosis. However, the efficacy of these treatments among oral GC users is still limited. We aimed to examine the comparative efficacy of osteoporosis treatments among oral GC users. We updated a systematic review through to March 2015 to identify all double-blinded randomized controlled trials (RCTs) that examined osteoporosis treatment among oral GC users. We used a network meta-analysis with informative priors to derive comparative risk ratios (RRs) and 95 % credible intervals (95 % CrI) for vertebral and non-vertebral fracture and mean differences in lumbar spine (LS) and femoral neck (FN) bone mineral density (BMD). Treatment ranking was estimated using the surface under the cumulative ranking curve (SUCRA) statistic. A meta-regression was completed to assess a subgroup effect between patients with prior GC exposures and GC initiators. We identified 27 eligible RCTs examining nine active comparators. Etidronate (RR, 0.41; 95%CrI = 0.17-0.90), risedronate (RR = 0.30, 95%CrI = 0.14-0.61), and teriparatide (RR = 0.07, 95%CrI = 0.001-0.48) showed greater efficacy than placebo in preventing vertebral fractures; yet, no treatment effects were statistically significant in reducing non-vertebral fractures. Alendronate, risedronate, and etidronate increased LS BMD while alendronate and raloxifene increased FN BMD. In preventing vertebral fractures, teriparatide was ranked as the best treatment (SUCRA: 77 %), followed by risedronate (77 %) and zoledronic acid (76 %). 
For non-vertebral fractures, teriparatide also had the highest SUCRA (69 %), followed by risedronate (64 %). No subgroup effect was identified with regards to prior GC exposure. Despite weak trial evidence available for fracture prevention among GC users, we identified several drugs that are likely to prevent osteoporotic fracture. Teriparatide, risedronate, and etidronate were associated with decreased vertebral fracture risk.
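The SUCRA statistic used to rank treatments is computed from each treatment's rank probabilities: it averages the cumulative probability of being among the top j ranks over the first a-1 ranks. A sketch with a hypothetical rank-probability matrix (not the study's estimates):

```python
import numpy as np

def sucra(rank_probs):
    """SUCRA from a (treatments x ranks) matrix of rank probabilities,
    rank 1 = best: mean cumulative rank probability over ranks 1..a-1."""
    a = rank_probs.shape[1]
    cum = np.cumsum(rank_probs, axis=1)[:, :-1]
    return cum.sum(axis=1) / (a - 1)

# Hypothetical rank probabilities for three treatments
p = np.array([[0.70, 0.20, 0.10],   # mostly ranked best
              [0.20, 0.60, 0.20],
              [0.10, 0.20, 0.70]])  # mostly ranked worst
print(sucra(p))  # [0.8 0.5 0.2]
```

A SUCRA of 1 means certain to be best, 0 certain to be worst, matching the percentage rankings quoted in the abstract.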
NASA Astrophysics Data System (ADS)
Ahn, Hyeon-Seon; Kidane, Tesfaye; Yamamoto, Yuhji; Otofuji, Yo-ichiro
2016-01-01
Palaeointensity variation is investigated for an inferred time period spanning from 2.34 to 1.96 Ma. Twenty-nine consecutive lava flows are sampled along cliffs 350 m high generated by normal faulting on the Dobi section of Afar depression, Ethiopia. Magnetostratigraphy and K-Ar measurements indicate a lava sequence of R-N-R-N geomagnetic field polarities in ascending order; the lower normal polarity is identified as the Réunion Subchron. Reliability of palaeomagnetic data is ascertained through careful thermal demagnetization and by the reversal test. The Tsunakawa-Shaw method yielded 70 successful palaeointensity results from 24 lava flows and gave 11 acceptable mean palaeointensities. Reliability in palaeointensity data is ascertained by the similar values obtained by the IZZI-Thellier method and thus 11 reliable mean values are obtained from our combined results. After the older reverse polarity with the field intensity of 19.6 ± 7.8 μT, an extremely low palaeointensity period with an average of 6.4 μT is shown to occur prior to the Réunion Subchron. During the Réunion Subchron, the dipole field strength is shown to have returned to an average of 19.5 μT, followed by a second extreme low of 3.6 μT and rejuvenation with 17.1 ± 5.3 μT in the younger reverse polarity. This 'W-shape' palaeointensity variation is characterized by occurrences of two extremely weak fields lower than 8 μT prior to and during the Réunion Subchron and a relatively weak time-averaged field of approximately 15 μT. This feature is also found in sedimentary cores from the Ontong Java Plateau and the north Atlantic, indicative of a possibly global geomagnetic field phenomenon rather than a local effect on Ethiopia. Furthermore, we estimate a weak virtual axial dipole moment of 3.66 (±1.85) × 1022 Am2 during the early stage of the Matuyama Chron (inferred time period of 2.34-1.96 Ma).
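The virtual axial dipole moment follows from the standard dipole-field relation m = 4πa³F / (μ₀√(1 + 3 sin²λ)). A sketch assuming a site latitude of about 11.5°N for the Dobi section and treating geographic latitude as magnetic latitude (both assumptions):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
A = 6.371e6            # Earth radius, m

def vadm(intensity_T, site_lat_deg):
    """Virtual axial dipole moment (A m^2) from a field intensity,
    using the site's geographic latitude as magnetic latitude."""
    lam = math.radians(site_lat_deg)
    return (4 * math.pi * A**3 * intensity_T /
            (MU0 * math.sqrt(1 + 3 * math.sin(lam)**2)))

# The ~15 uT time-averaged field at an assumed ~11.5 deg N latitude
print(f"{vadm(15e-6, 11.5):.2e}")  # ~3.7e22 A m^2
```

With these assumed inputs the result lands close to the 3.66 × 10²² Am² quoted in the abstract.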
Aligning a Receiving Antenna Array to Reduce Interference
NASA Technical Reports Server (NTRS)
Jongeling, Andre P.; Rogstad, David H.
2009-01-01
A digital signal-processing algorithm has been devised as a means of aligning (as defined below) the outputs of multiple receiving radio antennas in a large array for the purpose of receiving a desired weak signal transmitted by a single distant source in the presence of an interfering signal that (1) originates at another source lying within the antenna beam and (2) occupies a frequency band significantly wider than that of the desired signal. In the original intended application of the algorithm, the desired weak signal is a spacecraft telemetry signal, the antennas are spacecraft-tracking antennas in NASA's Deep Space Network, and the source of the wide-band interfering signal is typically a radio galaxy or a planet that lies along or near the line of sight to the spacecraft. The algorithm could also afford the ability to discriminate between desired narrow-band and nearby undesired wide-band sources in related applications that include satellite and terrestrial radio communications and radio astronomy. The development of the present algorithm involved modification of a prior algorithm called SUMPLE and a predecessor called SIMPLE. SUMPLE was described in Algorithm for Aligning an Array of Receiving Radio Antennas (NPO-40574), NASA Tech Briefs Vol. 30, No. 4 (April 2006), page 54. To recapitulate: As used here, aligning signifies adjusting the delays and phases of the outputs from the various antennas so that their relatively weak replicas of the desired signal can be added coherently to increase the signal-to-noise ratio (SNR) for improved reception, as though one had a single larger antenna. Prior to the development of SUMPLE, it was common practice to effect alignment by means of a process that involves correlation of signals in pairs. SIMPLE is an example of an algorithm that effects such a process. SUMPLE also involves correlations, but the correlations are not performed in pairs.
Instead, in a partly iterative process, each signal is appropriately weighted and then correlated with a composite signal equal to the sum of the other signals.
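The SUMPLE idea described above (each antenna correlated against the weighted sum of the others rather than against each peer in pairs) can be sketched in a toy simulation. The array size, signal level, iteration count, and variable names below are invented for illustration; this is not the actual Deep Space Network implementation, and it handles only phase alignment, not delays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n_ant antennas receive weak replicas of the same complex
# tone with unknown per-antenna phase offsets, plus independent noise.
n_ant, n_samp = 6, 8192
t = np.arange(n_samp)
tone = np.exp(2j * np.pi * 0.01 * t)            # desired narrow-band signal
phases = rng.uniform(0, 2 * np.pi, n_ant)       # unknown misalignments
noise = (rng.standard_normal((n_ant, n_samp))
         + 1j * rng.standard_normal((n_ant, n_samp))) / np.sqrt(2)
signals = 0.3 * tone * np.exp(1j * phases)[:, None] + noise

# SUMPLE-style iteration: correlate each antenna with the weighted sum
# of all the *other* antennas and use that correlation as the antenna's
# new complex weight, so all replicas converge to a common phase.
weights = np.ones(n_ant, dtype=complex)
for _ in range(10):
    weighted = weights[:, None] * signals
    total = weighted.sum(axis=0)
    new_weights = np.empty_like(weights)
    for k in range(n_ant):
        composite = total - weighted[k]          # sum of the others
        new_weights[k] = np.vdot(signals[k], composite) / n_samp
    weights = new_weights / np.abs(new_weights).max()

# Coherent combination: the weighted replicas now add in phase.
combined = (weights[:, None] * signals).sum(axis=0)
```

In the operational algorithm the correlations are accumulated over successive data blocks and the weights also reflect per-antenna SNR; the sketch only illustrates why correlating against a composite, rather than in pairs, still drives the array toward coherence.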
Weighted integration of short-term memory and sensory signals in the oculomotor system.
Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe
2018-05-01
Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed, and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by the reliability of visual and prior information. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which supports modeling work. Furthermore, we suggest that the saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
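In the simplest Gaussian case, the reliability-based integration this abstract refers to reduces to inverse-variance weighting of the prior (the short-term memory of target motion) and the visual measurement. A minimal sketch, with invented velocity and variance values:

```python
# Inverse-variance (reliability-weighted) fusion of a prior estimate of
# target velocity with a noisy visual measurement. All numbers invented.
def fuse(prior_mean, prior_var, sensory_mean, sensory_var):
    """Combine two Gaussian estimates; the more reliable one dominates."""
    w_prior = sensory_var / (prior_var + sensory_var)
    mean = w_prior * prior_mean + (1 - w_prior) * sensory_mean
    var = (prior_var * sensory_var) / (prior_var + sensory_var)
    return mean, var

# Reliable vision (low variance) pulls the estimate toward the measurement:
m_reliable, v_reliable = fuse(10.0, 4.0, 20.0, 1.0)    # mean -> 18.0
# Unreliable vision leaves the estimate closer to the prior:
m_noisy, v_noisy = fuse(10.0, 4.0, 20.0, 16.0)         # mean -> 12.0
```

Repeating this update at each time step, with the fused estimate becoming the next prior, yields the recurrent (Kalman-filter-like) form the models cited in the abstract use.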
Science Literacy and Prior Knowledge of Astronomy MOOC Students
NASA Astrophysics Data System (ADS)
Impey, Chris David; Buxner, Sanlyn; Wenger, Matthew; Formanek, Martin
2018-01-01
Many of the science classes offered on Coursera fall into the category of general education or general interest classes for lifelong learners, including our own, Astronomy: Exploring Time and Space. Very little is known about the backgrounds and prior knowledge of these students. In this talk we present the results of a survey of our Astronomy MOOC students. We also compare these results to our previous work on undergraduate students in introductory astronomy courses. Survey questions examined student demographics and motivations as well as their science and information literacy (including basic science knowledge, interest, attitudes and beliefs, and where they get their information about science). We found that our MOOC students differ from the undergraduate students in more ways than demographics. Many MOOC students demonstrated high levels of science and information literacy. With a more comprehensive understanding of our students’ motivations and prior knowledge about science and how they get their information about science, we will be able to develop more tailored learning experiences for these lifelong learners.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-01
... enzymes involved in mycoparasitism, and weak growth at the temperature of the human body (37 °C)), and... information to human risk. EPA has also considered available information concerning the variability of the... hypersensitivity incidents, including immediate-type or delayed-type reactions of humans and domestic animals...
Methods for Remote Determination of CO2 Emissions
2011-01-01
support monitoring of compliance with international agreements. • It is difficult to predict when direct measurements of CO2 will yield useful emission...level of reasonable prior information, which is combined with the direct measurements to yield an emissions estimate. This prior information might...infrastructure of a country could yield a “proxy” estimate of CO2 emissions by assuming emission factors for various supply and demand sectors a
Wiczling, Paweł; Bartkowska-Śniatkowska, Alicja; Szerkus, Oliwia; Siluk, Danuta; Rosada-Kurasińska, Jowita; Warzybok, Justyna; Borsuk, Agnieszka; Kaliszan, Roman; Grześkowiak, Edmund; Bienert, Agnieszka
2016-06-01
The purpose of this study was to assess the pharmacokinetics of dexmedetomidine in the ICU setting during prolonged infusion and to compare it with existing literature data using Bayesian population modeling with literature-based informative priors. Thirty-eight patients were included in the analysis, with concentration measurements obtained on two occasions: first from 0 to 24 h after infusion initiation, and second from 0 to 8 h after infusion end. Data analysis was conducted using WinBUGS software. The prior information on dexmedetomidine pharmacokinetics was elicited from a literature study pooling results from a relatively large group of 95 children. A two-compartment PK model, with allometrically scaled parameters, maturation of clearance, and a Student's t residual distribution on a log scale, was used to describe the data. The incorporation of time-dependent (different between the two occasions) PK parameters improved the model. It was observed that the volume of distribution was 1.5-fold higher during the second occasion. There was also evidence of increased (1.3-fold) clearance for the second occasion, with posterior probability equal to 62%. This work demonstrated the usefulness of Bayesian modeling with informative priors in analyzing pharmacokinetic data and comparing it with existing literature knowledge.
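As a concrete illustration of the allometric scaling and clearance maturation mentioned in the abstract above, the following sketch uses the conventional 0.75 allometric exponent with a sigmoid maturation function of post-menstrual age. The maturation parameters (pma50, hill) and the clearance value are placeholders, not the study's estimates.

```python
# Allometric body-size scaling plus sigmoidal maturation of clearance,
# as is standard in paediatric population PK models. Parameter values
# are illustrative placeholders only.
def clearance(cl_std, weight_kg, pma_weeks, pma50=46.0, hill=3.0):
    """Individual clearance: standard adult clearance (for 70 kg),
    scaled by body size and by a maturation function of
    post-menstrual age (PMA)."""
    size = (weight_kg / 70.0) ** 0.75
    maturation = pma_weeks ** hill / (pma50 ** hill + pma_weeks ** hill)
    return cl_std * size * maturation

adult = clearance(40.0, weight_kg=70.0, pma_weeks=2000.0)   # approaches cl_std
infant = clearance(40.0, weight_kg=5.0, pma_weeks=44.0)     # much lower
```

Under a Bayesian analysis such as the one described, the population parameters of such a model (here cl_std, pma50, hill) would carry informative priors elicited from the published paediatric study rather than the defaults shown.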
Bayes-LQAS: classifying the prevalence of global acute malnutrition
2010-01-01
Lot Quality Assurance Sampling (LQAS) applications in health have generally relied on frequentist interpretations for statistical validity. Yet health professionals often seek statements about the probability distribution of unknown parameters to answer questions of interest. The frequentist paradigm does not pretend to yield such information, although a Bayesian formulation might. This is the source of an error made in a recent paper published in this journal. Many applications lend themselves to a Bayesian treatment, and would benefit from such considerations in their design. We discuss Bayes-LQAS (B-LQAS), which allows for incorporation of prior information into the LQAS classification procedure, and thus shows how to correct the aforementioned error. Further, we pay special attention to the formulation of Bayes Operating Characteristic Curves and the use of prior information to improve survey designs. As a motivating example, we discuss the classification of Global Acute Malnutrition prevalence and draw parallels between the Bayes and classical classification schemes. We also illustrate the impact of informative and non-informative priors on the survey design. Results indicate that using a Bayesian approach allows the incorporation of expert information and/or historical data and is thus potentially a valuable tool for making accurate and precise classifications. PMID:20534159
Bayes-LQAS: classifying the prevalence of global acute malnutrition.
Olives, Casey; Pagano, Marcello
2010-06-09
Lot Quality Assurance Sampling (LQAS) applications in health have generally relied on frequentist interpretations for statistical validity. Yet health professionals often seek statements about the probability distribution of unknown parameters to answer questions of interest. The frequentist paradigm does not pretend to yield such information, although a Bayesian formulation might. This is the source of an error made in a recent paper published in this journal. Many applications lend themselves to a Bayesian treatment, and would benefit from such considerations in their design. We discuss Bayes-LQAS (B-LQAS), which allows for incorporation of prior information into the LQAS classification procedure, and thus shows how to correct the aforementioned error. Further, we pay special attention to the formulation of Bayes Operating Characteristic Curves and the use of prior information to improve survey designs. As a motivating example, we discuss the classification of Global Acute Malnutrition prevalence and draw parallels between the Bayes and classical classification schemes. We also illustrate the impact of informative and non-informative priors on the survey design. Results indicate that using a Bayesian approach allows the incorporation of expert information and/or historical data and is thus potentially a valuable tool for making accurate and precise classifications.
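A minimal numerical sketch of the B-LQAS idea described above: with a Beta prior on prevalence and a binomial sample, the posterior probability that prevalence exceeds an action threshold drives the classification. The sample size, threshold, and prior below are illustrative, not the paper's design values.

```python
import math

def prob_prevalence_above(threshold, n, x, a=1.0, b=1.0, grid=10001):
    """P(prevalence > threshold | x cases in a sample of n) under a
    Beta(a, b) prior, via a grid approximation of the Beta(a + x,
    b + n - x) posterior."""
    A, B = a + x, b + n - x
    ps = [i / (grid - 1) for i in range(grid)]
    # Unnormalized posterior density, evaluated in log space.
    dens = [math.exp((A - 1) * math.log(p) + (B - 1) * math.log(1 - p))
            if 0.0 < p < 1.0 else 0.0
            for p in ps]
    total = sum(dens)
    return sum(d for p, d in zip(ps, dens) if p > threshold) / total

# Classify a community as "high GAM prevalence" when the posterior
# probability of exceeding a 10% action threshold is large; an
# informative prior (a, b) could encode historical survey data.
post_high = prob_prevalence_above(0.10, n=50, x=9)   # 9/50 observed
post_low = prob_prevalence_above(0.10, n=50, x=2)    # 2/50 observed
```

With a Beta-distributed prior the posterior tail probability is available exactly as an incomplete beta function; the grid approximation above just keeps the sketch dependency-free.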
Deficits in voice and multisensory processing in patients with Prader-Willi syndrome.
Salles, Juliette; Strelnikov, Kuzma; Carine, Mantoulan; Denise, Thuilleaux; Laurier, Virginie; Molinas, Catherine; Tauber, Maïthé; Barone, Pascal
2016-05-01
Prader-Willi syndrome (PWS) is a rare neurodevelopmental and genetic disorder characterized by variable expression of endocrine, cognitive, and behavioral problems, among which are a true obsession with food and a deficit of satiety that lead to hyperphagia and severe obesity. Neuropsychological studies have reported that patients with PWS display altered social interactions, with a specific weakness in interpreting and responding to social information, a symptom close to that observed in autism spectrum disorders (ASD). Based on the hypothesis that atypical multisensory integration, such as face-voice interactions, contributes to social impairment in PWS, we investigated the ability of patients with PWS to process communication signals, including the human voice. Patients with PWS recruited from the national reference center for PWS performed a simple detection task of stimuli presented in a unimodal or bimodal condition, as well as a voice discrimination task. Compared with typically developing (TD) control individuals, patients with PWS presented a specific deficit in discriminating human voices from environmental sounds. Further, patients with PWS presented a much lower multisensory benefit, with an absence of violation of the race model, indicating that multisensory information does not converge and interact prior to the initiation of the behavioral response. All the deficits observed were stronger in the subgroup of patients with uniparental disomy, a population known to be more sensitive to ASD. Altogether, our study suggests that the deficits in social behavior observed in PWS derive at least partly from an impairment in deciphering the social information carried by voice signals, face signals, and the combination of both. In addition, our work is in agreement with brain imaging studies revealing an alteration in PWS of the "social brain network", including the STS region involved in processing human voices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Menstrual characteristics amongst south-eastern Nigerian adolescent school girls.
Adinma, E D; Adinma, J I B
2009-03-01
Information on the pattern of menstruation and its implications is lacking amongst adolescents in Nigeria. To examine the characteristics of menstruation amongst adolescent Igbo school girls with respect to biosocial characteristics, the pattern of menstruation, associated complications, and the source of information on menstruation. A descriptive cross-sectional study of 550 students recruited from a multi-sampling of 50 secondary schools in Onitsha, Anambra State, Nigeria, using pre-tested, semi-structured, interviewer-administered questionnaires. Four hundred and sixteen (75.6%) respondents were aged 15-17 years, of whom 338 (61.4%) were Catholics. The menarcheal age of respondents ranged from 11 to 16 years, with a mean of 13.40 +/- 1.15 years. Menstruation was regular in 410 (74.5%) and irregular in 124 (22.5%) of respondents. Duration of menstrual flow ranged between two and eight days, although a four-day flow occurred most commonly, 268 (53.6%). Abdominal pain (66.2%) and waist pain (38.5%) constituted the major problems associated with menstruation, followed by depression (24.4%), vomiting (6.9%), school absenteeism (4.5%), anorexia (1.8%), weakness (1.5%), and increased appetite (1.1%). The commonest source of information on menstruation (prior to menarche) amongst respondents was the mother (48.4%), followed by elder sister (14.2%) and friends (8.7%), while the teacher constituted the least common source (1.1%). The characteristics of menstruation in this study do not differ considerably from those reported amongst other adolescent girls. Associated complications may have a profound psychosocial impact on the growing adolescent girl and require attention, best achieved through the empowerment of mothers and teachers under a comprehensive family life education scheme.
Dispositional Optimism and Therapeutic Expectations in Early Phase Oncology Trials
Jansen, Lynn A.; Mahadevan, Daruka; Appelbaum, Paul S.; Klein, William MP; Weinstein, Neil D.; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P.
2016-01-01
Purpose Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early phase oncology trials. Optimism, however, is not a unitary construct; it can also be defined as a general disposition, or what is called dispositional optimism. We assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. We also assessed how dispositional optimism related to unrealistic optimism. Methods Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Results Dispositional optimism was significantly associated with higher expectations for personal therapeutic benefit (Spearman r=0.333, p<0.0001), but was not associated with the therapeutic misconception (Spearman r=−0.075, p=0.329). Dispositional optimism was weakly associated with unrealistic optimism (Spearman r=0.215, p=0.005). In multivariate analysis, both dispositional optimism (p=0.02) and unrealistic optimism (p<0.0001) were independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (p=0.0001), but not dispositional optimism, was independently associated with the therapeutic misconception. Conclusion High expectations for therapeutic benefit among patient-subjects in early phase oncology trials should not be assumed to result from misunderstanding of specific information about the trials. Our data reveal that these expectations are associated with either a dispositionally positive outlook on life or biased expectations about specific aspects of trial participation. Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early phase oncology research. PMID:26882017
Dispositional optimism and therapeutic expectations in early-phase oncology trials.
Jansen, Lynn A; Mahadevan, Daruka; Appelbaum, Paul S; Klein, William M P; Weinstein, Neil D; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P
2016-04-15
Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early-phase oncology trials. However, optimism is not a unitary construct; it also can be defined as a general disposition, or what is called dispositional optimism. The authors assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. The authors also assessed how dispositional optimism related to unrealistic optimism. Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Dispositional optimism was found to be significantly associated with higher expectations for personal therapeutic benefit (Spearman rank correlation coefficient [r], 0.333; P<.0001), but was not associated with the therapeutic misconception (Spearman r, -0.075; P = .329). Dispositional optimism was found to be weakly associated with unrealistic optimism (Spearman r, 0.215; P = .005). On multivariate analysis, both dispositional optimism (P = .02) and unrealistic optimism (P<.0001) were found to be independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (P = .0001), but not dispositional optimism, was found to be independently associated with the therapeutic misconception. High expectations for therapeutic benefit among patient-subjects in early-phase oncology trials should not be assumed to result from misunderstanding of specific information regarding the trials. The data from the current study indicate that these expectations are associated with either a dispositionally positive outlook on life or biased expectations concerning specific aspects of trial participation. 
Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early-phase oncology research. © 2016 American Cancer Society.
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. 
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
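The contrast between the two coding schemes tested in the abstract above can be illustrated with a toy feature vector. The numbers and the specific sharpening rule are invented for illustration, not taken from the computational model in the paper.

```python
import numpy as np

# A "speech" feature vector and a prior expectation over the same
# features (all values invented for illustration).
sensory = np.array([0.2, 0.9, 0.1, 0.8])    # observed feature strengths
expected = np.array([0.0, 1.0, 0.0, 1.0])   # informative prior expectation

# Predictive Coding: represent the residual, so features that match the
# expectation are suppressed and only surprising features propagate.
prediction_error = sensory - expected

# Sharpened Signals: boost the expected features instead (one simple
# multiplicative-gain variant), then normalize.
sharpened = sensory * (1.0 + expected)
sharpened = sharpened / sharpened.sum()

# With an accurate prior, prediction-error activity is small even though
# the sensory input is strong, whereas sharpened-signal activity for the
# expected features grows; this opposing behavior is the kind of
# interaction the multivariate fMRI analysis exploits.
```

An uninformative prior (expected all zeros) leaves the prediction error equal to the raw sensory input, mirroring the paper's finding that sensory detail increases measured information only when prior knowledge is absent.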
Internet Freedom and Political Space
2013-01-01
information across a wide range of different communities. Thus, weak ties can deliver information in communities that are not interlinked with each... information. Such a top-down communication left little opportunity for grassroots input into the movement’s goals and its development of...Republic of Egypt, Ministry of Communications and Information Technologies, The Future of Internet Economy in Egypt: A Statistical Profile, May 2011
78 FR 56242 - Agency Information Collection Activities: Prior Disclosure
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
... information (total capital/startup costs and operations and maintenance costs). The comments that are... information collected. Type of Review: Extension (without change). Affected Public: Businesses. Estimated...
NASA Astrophysics Data System (ADS)
Molnar, Nicolas; Cruden, Alexander
2017-04-01
Propagating rifts are a natural consequence of lithospheric plates that diverge with respect to each other about a pole of rotation. This process of "unzipping" is common in the geological record, but how rifts interact with pre-existing structures (i.e., with a non-homogeneous lithosphere) as they propagate is poorly understood. Here we report on a series of lithospheric-scale three-dimensional analogue experiments of rotational extension with in-built, variably oriented linear weak zones in the lithospheric mantle, designed to investigate the role that inherited structural or thermal weaknesses play in the localisation of strain and rifting. Surface strain and dynamic topography in the analogue models are quantified by high-resolution particle imaging velocimetry and digital photogrammetry, which allows us to characterise the spatio-temporal evolution of deformation in great detail as a function of the orientation of the linear heterogeneities. The results show that the presence of a linear zone of weakness oriented at a low angle with respect to the rift axis (i.e., favourably oriented) produces strain localisation in narrow domains, which enhances the "unzipping" process prior to continental break-up. Strong strain partitioning is observed when the linear heterogeneity is oriented at a high angle with respect to the rift axis (i.e., unfavourably oriented). In these experiments, early sub-parallel V-shaped basins propagate towards the pole of rotation until they are abandoned and strain is transferred entirely to structures developed in the vicinity of the strongly oblique weak lithosphere zone boundary. The modelling also provides insights into how propagating rift branches that penetrate the weak linear zone boundary are aborted when strain is relayed onto structures that develop in rheologically weaker areas.
The experimental results are summarised in terms of their evolution, patterns of strain localisation, and dynamic topography as a function of the lithospheric heterogeneity obliquity angle, and compared to ancient and modern examples in nature.