A Model for Maximum Likelihood Estimation and Testing

The classical approach to testing the capital asset pricing model (CAPM) is based on likelihood theory under the assumption of normality. Under these conditions we can obtain the properties we want, which ensure that the estimation is carried out by maximum likelihood. In some cases we can test hypotheses about individual intercepts using the t test, where the relevant diagonal element of the estimated covariance matrix is used. The Lagrange multiplier, or score, test is based on the residual vector of the restricted estimators. The null hypothesis in this case restricts only the intercept parameters, so the score test can be expressed clearly in terms of these parameters and the ratios between the estimated quantities. If the data series are normally distributed, the exact critical value of the resulting distribution can be used. This test is usually called the GRS test after Gibbons, Ross and Shanken, who proposed it in 1989. The quantity representing the average pricing error can be interpreted in terms of apparently exploitable returns. In the context of the CAPM, the three summary statistics should tend towards zero, so that the assumed and tested pricing restriction is satisfied. It is found, however, that some of the investment weights of the tangency portfolio may be negative. Moreover, as the number of assets increases, it is found empirically that the share of assets with negative weights in the portfolio approaches 50%, which bears directly on the equilibrium interpretation and on the correlation between the estimators.


Introduction
This article starts from the premise that maximum likelihood estimation and testing must be worked out for the capital asset pricing model (CAPM). It discusses the conditions that such a test must meet. Conditioning on market returns is common practice in regression models without dynamic effects and rests on the assumption that the marginal distribution of market returns contains no information about the parameters of interest, which describe the distribution of individual returns conditional on the market.
A number of problems nevertheless arise with respect to market returns, because the market return is itself a weighted sum of the individual returns being modelled. This matters when portfolios are used that make up a large share of the overall market portfolio. The situation is illustrated by estimating the market model on daily data within a month, obtaining the corresponding results, and then on monthly data, using the seemingly unrelated regressions framework of Zellner (1962).
Combining the daily and the monthly cases, the Gaussian likelihood for the return vectors, conditional on the market return, is given by a relation that includes all the statistical variables relevant to this study.
The parameter equations are then collected into an alternative form, that is, a single matrix equation containing the data mentioned above. Under the assumption of normality, conditional on the market excess returns, the exact distributions of the estimators follow, because the OLS estimators are linear combinations of normally distributed error terms.
A series of hypothetical situations is then presented in which estimation and testing are carried out by maximum likelihood, which ensures a high level of confidence in the results. One can compare the test statistic with the critical value of the exact distribution, if normality is assumed, or with a standard normal distribution when inference is based on large samples; the test then has power against the relevant alternative. All the individual statistics can be computed, but the significance level must then be adjusted for multiple testing. In this sense, the relations we use and the tests we perform are all likelihood-based.
Berndt and Savin (1977) showed that in finite samples the Wald statistic is the most likely of the three to reject the null hypothesis, while the LM test is the least likely to reject it.
A series of issues related to critical values follows: the differences between the tests that arise when asymptotic critical values are used, and a critical analysis of the CAPM, which does not always lead to the desired results. Reference is made to researchers who have examined the usefulness of the CAPM, under which all three average pricing-error statistics should be approximately zero, that is, the tests should lead to the same conclusion.
In this article we use data chosen to demonstrate the need for maximum likelihood estimation and testing, through which the obtained results are guaranteed.

Research methodology, results and discussion
The classical approach to testing the capital asset pricing model (CAPM) is based on likelihood theory under the assumption of normality. In this setting, simple exact tests of the null hypothesis can be obtained. However, the evidence for normality of stock market returns is weak, even at the monthly horizon. In that case the tests presented here remain valid asymptotically, as long as the sample size T is sufficiently large and the data generating process satisfies certain weaker conditions. In this approach it is natural to condition on the market returns rather than treat them as random. This is common practice in regression models without dynamic effects and rests on the notion of weak exogeneity: the object of interest is the distribution of individual returns conditional on the market return. This may appear paradoxical, since the market return is itself composed of the individual returns. Conditioning on a weighted sum of the dependent variables does seem unnatural when the chosen assets make up a relatively large share of the market portfolio.
Let $r_t$ denote the $N \times 1$ vector of excess returns at time $t$ and $r_{mt}$ the excess return on the market. The market model in vector notation is of the form

$r_t = \alpha + \beta r_{mt} + \varepsilon_t,$ (1)

where $\varepsilon_t$ is the vector of residuals and $\alpha$ and $\beta$ are $N \times 1$ vectors of unknown parameters.
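As an illustration, the market model above can be estimated equation by equation on simulated data. This is a minimal sketch under the CAPM null (all intercepts zero); the dimensions and parameter values are hypothetical, not taken from the article.

```python
import numpy as np

# Hypothetical illustration of the market model r_t = alpha + beta*r_mt + eps_t:
# simulate N assets over T periods under the CAPM null (alpha = 0) and recover
# alpha and beta by equation-by-equation OLS. All values are illustrative.
rng = np.random.default_rng(0)
T, N = 600, 5
beta = rng.uniform(0.5, 1.5, N)            # market loadings
r_m = rng.normal(0.006, 0.045, T)          # market excess returns
eps = rng.normal(0.0, 0.02, (T, N))        # idiosyncratic errors
R = np.outer(r_m, beta) + eps              # T x N excess returns (alpha = 0)

X = np.column_stack([np.ones(T), r_m])     # regressors: intercept and market
B_hat = np.linalg.solve(X.T @ X, X.T @ R)  # OLS for all N equations at once
alpha_hat, beta_hat = B_hat[0], B_hat[1]
```

With T = 600 the estimated intercepts are close to zero and the loadings close to their true values, as the exact distributions derived below predict.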
The covariance matrix $\Sigma$ of the residuals is unrestricted, that is, it allows correlation between the idiosyncratic errors. It is also common practice to assume that the idiosyncratic terms are uncorrelated, in which case $\Sigma$ is diagonal. In some studies the authors use overlapping portfolios built from anomalies, and this induces correlation in the errors even when the errors of the underlying assets are idiosyncratic. The system is a seemingly unrelated regression in Zellner's (1962) terminology, with the same regressors, an intercept and the market return, in every equation. If the assumptions above are satisfied, the Gaussian log-likelihood for the observed vectors, conditional on the market returns, becomes

$\log L = c - \frac{T}{2}\log|\Sigma| - \frac{1}{2}\sum_{t=1}^{T}\varepsilon_t'\Sigma^{-1}\varepsilon_t,$ (2)

where c is a constant that does not depend on the unknown parameters.
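The log-likelihood above can be evaluated directly. The sketch below, on hypothetical simulated data, checks that moving the intercepts away from their true value of zero lowers the likelihood, which is what maximum likelihood estimation exploits.

```python
import numpy as np

# Evaluate the Gaussian log-likelihood of the market model, up to the
# additive constant c. Data and parameter values are hypothetical.
def log_lik(R, r_m, alpha, beta, Sigma):
    T = R.shape[0]
    eps = R - alpha - np.outer(r_m, beta)            # T x N residual matrix
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum('ti,ij,tj->', eps, np.linalg.inv(Sigma), eps)
    return -0.5 * T * logdet - 0.5 * quad            # constant c omitted

rng = np.random.default_rng(1)
T, N = 200, 3
beta = np.array([0.8, 1.0, 1.2])
r_m = rng.normal(0.005, 0.04, T)
R = np.outer(r_m, beta) + rng.normal(0.0, 0.02, (T, N))   # true alpha = 0

Sigma = 0.02**2 * np.eye(N)                          # true error covariance
ll_at_truth = log_lik(R, r_m, np.zeros(N), beta, Sigma)
ll_shifted = log_lik(R, r_m, np.full(N, 0.05), beta, Sigma)
```

Shifting every intercept by 0.05 (several residual standard deviations) produces a clearly lower log-likelihood than the true parameter values.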

Maximum likelihood estimates
The MLE is equation-by-equation OLS, because the market return and the intercept are common regressors in each equation. This gives

$\hat\beta_i = \frac{\sum_{t}(r_{it}-\bar r_i)(r_{mt}-\bar r_m)}{\sum_{t}(r_{mt}-\bar r_m)^2}, \qquad \hat\alpha_i = \bar r_i - \hat\beta_i\,\bar r_m, \qquad i=1,\dots,N,$ (3)

which can be written in vector form as

$\hat\beta = \frac{\sum_{t}(r_{t}-\bar r)(r_{mt}-\bar r_m)}{\sum_{t}(r_{mt}-\bar r_m)^2},$ (4)

$\hat\alpha = \bar r - \hat\beta\,\bar r_m.$ (5)

The maximum likelihood estimate of the covariance matrix is

$\hat\Sigma = \frac{1}{T}\sum_{t}\hat\varepsilon_t\hat\varepsilon_t',$ (6)

where $\hat\varepsilon_t = r_t - \hat\alpha - \hat\beta r_{mt}$. We can also collect the parameter equations into a single matrix equation. Let $B = [\alpha \;\; \beta]'$ be the $2\times N$ parameter matrix, let X be the $T\times 2$ matrix containing a column of ones and the observations $r_{mt}$, and let R be the $T\times N$ matrix containing the excess returns on the individual assets. Then, in system notation, the estimator of B is

$\hat B = (X'X)^{-1}X'R.$ (7)
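A sketch of the matrix-form estimator and of the covariance estimate above, on hypothetical simulated data; it also confirms that the stacked formula reproduces single-equation OLS, since the regressors are common to every equation.

```python
import numpy as np

# Matrix-form MLE: B_hat = (X'X)^{-1} X'R and Sigma_hat = E'E / T,
# where R stacks the excess returns of N assets. Illustrative data only.
rng = np.random.default_rng(2)
T, N = 300, 4
r_m = rng.normal(0.005, 0.04, T)
R = 0.001 + np.outer(r_m, rng.uniform(0.7, 1.3, N)) \
    + rng.normal(0.0, 0.02, (T, N))

X = np.column_stack([np.ones(T), r_m])        # T x 2 regressor matrix
B_hat = np.linalg.solve(X.T @ X, X.T @ R)     # 2 x N: rows are alpha', beta'
E = R - X @ B_hat                             # residual matrix
Sigma_hat = E.T @ E / T                       # MLE uses divisor T, not T-2

# Same numbers as OLS run one equation at a time:
b_first, *_ = np.linalg.lstsq(X, R[:, 0], rcond=None)
```

The divisor T (rather than a degrees-of-freedom correction) is what makes this the maximum likelihood estimate of the covariance matrix.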
Let $\hat\theta = (\hat\alpha',\hat\beta')'$ be the vector of estimators in this ordering; relations (4), (5) and (7) are then the useful ones. Under the assumption of normality, conditional on the market excess returns, the exact distributions are

$\hat\alpha \sim N\!\Big(\alpha,\ \tfrac{1}{T}\big(1+\hat\mu_m^2/\hat\sigma_m^2\big)\Sigma\Big), \qquad \hat\beta \sim N\!\Big(\beta,\ \tfrac{1}{T\hat\sigma_m^2}\Sigma\Big),$

because the estimators are linear combinations of normally distributed error terms. Here $\hat\mu_m$ and $\hat\sigma_m^2$ denote the sample mean and variance of the market excess return. Moreover, $\hat\alpha$ and $\hat\beta$ are correlated, with $\mathrm{Cov}(\hat\alpha,\hat\beta) = -\tfrac{\hat\mu_m}{T\hat\sigma_m^2}\Sigma$, which is negative when the average market excess return is positive.
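A small Monte Carlo sketch (single asset, hypothetical parameter values) of the exact conditional variance of $\hat\alpha$ stated above: holding the market series fixed across replications, the simulated variance of $\hat\alpha$ matches $(1/T)(1+\hat\mu_m^2/\hat\sigma_m^2)\sigma_\varepsilon^2$.

```python
import numpy as np

# Monte Carlo check of Var(alpha_hat) = (1/T)(1 + mu_m^2/sigma_m^2) * sigma_e^2
# conditional on the market returns (one asset; all values illustrative).
rng = np.random.default_rng(3)
T, reps = 250, 4000
beta_true, sig_e = 1.0, 0.02
r_m = rng.normal(0.006, 0.045, T)           # held fixed across replications
X = np.column_stack([np.ones(T), r_m])
XtX = X.T @ X

alpha_draws = np.empty(reps)
for i in range(reps):
    r = beta_true * r_m + rng.normal(0.0, sig_e, T)
    alpha_draws[i] = np.linalg.solve(XtX, X.T @ r)[0]

mu_hat = r_m.mean()
var_hat = r_m.var()                          # divisor T, matching the MLE
theory = (1 + mu_hat**2 / var_hat) * sig_e**2 / T
ratio = alpha_draws.var() / theory           # should be close to 1
```

With 4000 replications the simulated-to-theoretical variance ratio is close to one, consistent with the exact conditional distribution.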

The distribution of the covariance estimator
With $\hat\Sigma$ the MLE defined above, the random matrix $T\hat\Sigma$ follows a Wishart distribution, $T\hat\Sigma \sim W_N(T-2,\Sigma)$.
We can also deduce that $\mathrm{Var}\big(\mathrm{vec}(\hat B)\big) = (X'X)^{-1}\otimes\Sigma$, where $\otimes$ is the Kronecker product of the two matrices. We can then test the hypothesis $H_0\!: \alpha_i = 0$ against $H_1\!: \alpha_i \neq 0$ using the t test

$t_i = \hat\alpha_i / \sqrt{\hat v_{ii}},$

where $\hat v_{ii}$ is the corresponding diagonal element of the estimated covariance matrix of $\hat\alpha$. We compare $t_i$ with the critical value of the $t_{T-2}$ distribution, if normality is assumed, or of a standard normal distribution when inference is based on large samples. This test has power against the alternative $\alpha_i \neq 0$.
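The t test above can be sketched for a single asset on hypothetical data generated under the null, so the statistic should be small.

```python
import numpy as np

# t test of H0: alpha_i = 0 for one asset. The data are simulated with
# alpha = 0 (the null is true); all values are illustrative.
rng = np.random.default_rng(4)
T = 400
r_m = rng.normal(0.005, 0.04, T)
r = 1.1 * r_m + rng.normal(0.0, 0.02, T)   # true alpha = 0

X = np.column_stack([np.ones(T), r_m])
coef = np.linalg.solve(X.T @ X, X.T @ r)   # (alpha_hat, beta_hat)
resid = r - X @ coef
s2 = resid @ resid / (T - 2)               # residual variance, T-2 df
cov = s2 * np.linalg.inv(X.T @ X)          # covariance of (alpha_hat, beta_hat)
t_alpha = coef[0] / np.sqrt(cov[0, 0])     # compare with t_{T-2} critical value
```

Under the null, $|t_\alpha|$ should lie well inside the $t_{T-2}$ critical values at conventional significance levels.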
All N statistics $t_1,\dots,t_N$ can be calculated, but the significance level must then be adjusted for multiple testing.
The Wald test statistic for the joint null hypothesis $\alpha = 0$ has the form

$W = T\big(1+\hat\mu_m^2/\hat\sigma_m^2\big)^{-1}\hat\alpha'\hat\Sigma^{-1}\hat\alpha,$ (11)

where $\hat\Sigma$ is the MLE defined above. In large samples, as $T\to\infty$, W is approximately $\chi^2_N$ distributed under the null hypothesis, and the test has power against all alternatives with $\alpha\neq 0$. The Lagrange multiplier, or score, test is based on the residual vector of the restricted estimators. The MLE restricted by $\alpha=0$ is the OLS estimator without an intercept,

$\tilde\beta = \frac{\sum_t r_t r_{mt}}{\sum_t r_{mt}^2},$ (12)

and the restricted maximum likelihood estimate of $\Sigma$ becomes

$\tilde\Sigma = \frac{1}{T}\sum_t \tilde\varepsilon_t\tilde\varepsilon_t',$ (13)

where $\tilde\varepsilon_t = r_t - \tilde\beta r_{mt}$. The null hypothesis in this case restricts only the intercept parameters, so the score test can be expressed in terms of these parameters. The score function is proportional to the mean of the restricted residuals,

$\bar{\tilde\varepsilon} = \frac{1}{T}\sum_t \tilde\varepsilon_t,$ (14)

which is normally distributed with mean zero and a variance that can be calculated. The LM test statistic becomes

$LM = T\big(1+\hat\mu_m^2/\hat\sigma_m^2\big)\,\bar{\tilde\varepsilon}'\,\tilde\Sigma^{-1}\,\bar{\tilde\varepsilon} = T\big(1 - |\hat\Sigma|/|\tilde\Sigma|\big).$ (15)

Under the null hypothesis, this is approximately $\chi^2_N$. The likelihood ratio (LR) test is based on a comparison of the residual covariance of the restricted model with that of the unrestricted model,

$LR = T\big(\log|\tilde\Sigma| - \log|\hat\Sigma|\big),$

which is also approximately $\chi^2_N$ in large samples under the null hypothesis.
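The three statistics can be computed together from the unrestricted and the restricted (no-intercept) fits. This sketch uses hypothetical data generated under the null; it also checks the exact determinant relation between W and LR used below.

```python
import numpy as np

# Wald, LR and LM statistics for H0: alpha = 0. The LM statistic is computed
# via its determinant form T(1 - |Sig_u|/|Sig_r|). Illustrative data only.
rng = np.random.default_rng(5)
T, N = 500, 5
r_m = rng.normal(0.006, 0.045, T)
R = np.outer(r_m, rng.uniform(0.6, 1.4, N)) + rng.normal(0.0, 0.02, (T, N))

X = np.column_stack([np.ones(T), r_m])
B = np.linalg.solve(X.T @ X, X.T @ R)                 # unrestricted OLS
alpha_hat = B[0]
Sig_u = (R - X @ B).T @ (R - X @ B) / T               # unrestricted MLE of Sigma

b_r = (r_m @ R) / (r_m @ r_m)                         # restricted: no intercept
E_r = R - np.outer(r_m, b_r)
Sig_r = E_r.T @ E_r / T                               # restricted MLE of Sigma

a = 1.0 / (1.0 + r_m.mean()**2 / r_m.var())           # (1 + mu^2/sigma^2)^{-1}
W = T * a * alpha_hat @ np.linalg.solve(Sig_u, alpha_hat)
_, ld_u = np.linalg.slogdet(Sig_u)
_, ld_r = np.linalg.slogdet(Sig_r)
LR = T * (ld_r - ld_u)
LM = T * (1.0 - np.exp(ld_u - ld_r))
```

The finite-sample ordering W ≥ LR ≥ LM holds by construction, and LR equals $T\log(1+W/T)$ exactly.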
Berndt and Savin (1977) showed that $W \ge LR \ge LM$ in finite samples, which means that the Wald statistic is the most likely to reject and the LM test the least likely to reject when asymptotic critical values are used. In fact, under the normality hypothesis there is a finite-sample version of the Wald test that uses the known Wishart distribution of $T\hat\Sigma$, leading to

$F = \frac{T-N-1}{N}\big(1+\hat\mu_m^2/\hat\sigma_m^2\big)^{-1}\hat\alpha'\hat\Sigma^{-1}\hat\alpha = \frac{T-N-1}{TN}\,W.$

F is distributed exactly as $F_{N,\,T-N-1}$, and thus we can perform the test using the critical value of this distribution. This test is usually called the GRS test after Gibbons, Ross and Shanken (Gibbons et al., 1989). It resembles the standard regression F test, except that instead of testing whether the covariates are jointly significant, it tests whether the intercept vector is jointly significant.
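A sketch of the GRS F statistic on hypothetical data in which the null is false (a sizeable common intercept is injected), so the test should reject at the 5% level; scipy supplies the exact F critical value.

```python
import numpy as np
from scipy.stats import f as f_dist

# GRS test: F = ((T-N-1)/N) * (1 + mu^2/sig^2)^{-1} * alpha' Sigma^{-1} alpha,
# exactly F(N, T-N-1) under the null. Here a large alpha is injected
# deliberately (illustrative data), so F should exceed the 5% critical value.
rng = np.random.default_rng(6)
T, N = 500, 5
r_m = rng.normal(0.006, 0.045, T)
alpha_true = np.full(N, 0.015)                       # large common mispricing
R = alpha_true + np.outer(r_m, rng.uniform(0.6, 1.4, N)) \
    + rng.normal(0.0, 0.02, (T, N))

X = np.column_stack([np.ones(T), r_m])
B = np.linalg.solve(X.T @ X, X.T @ R)
alpha_hat = B[0]
Sigma_hat = (R - X @ B).T @ (R - X @ B) / T          # MLE, divisor T
a = 1.0 / (1.0 + r_m.mean()**2 / r_m.var())
F = (T - N - 1) / N * a * alpha_hat @ np.linalg.solve(Sigma_hat, alpha_hat)
crit = f_dist.ppf(0.95, N, T - N - 1)                # exact 5% critical value
```

Because the injected intercepts are many standard errors from zero, F is far above the critical value and the CAPM restriction is rejected.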
Another important insight due to GRS is that F is proportional to the difference between the squared Sharpe ratios of the ex-post tangency portfolio q and the market portfolio m:

$F = \frac{T-N-1}{N}\cdot\frac{\hat\theta_q^2 - \hat\theta_m^2}{1+\hat\theta_m^2}.$

This has a useful graphical interpretation in terms of investment theory. The three likelihood-based tests are exact monotone transformations of the F statistic:

$W = \frac{TN}{T-N-1}\,F, \qquad LR = T\log\Big(1+\frac{W}{T}\Big), \qquad LM = \frac{W}{1+W/T}.$

It follows that if the level-$\gamma$ critical value for F is transformed in the same way and used as a critical value for W, LR or LM, the same test decisions are obtained. Differences between the tests appear only when asymptotic critical values are used. All three tests are consistent against all alternatives for which $\alpha \neq 0$.
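The Sharpe-ratio form can be checked numerically: in sample, the maximal squared Sharpe ratio attainable from the N assets together with the market exceeds the market's own squared Sharpe ratio by exactly $\hat\alpha'\hat\Sigma^{-1}\hat\alpha$. A sketch on hypothetical data:

```python
import numpy as np

# Verify the GRS identity theta_q^2 - theta_m^2 = alpha' Sigma^{-1} alpha,
# where theta_q is the ex-post tangency (maximal) Sharpe ratio of the assets
# plus the market, and theta_m the market Sharpe ratio. Illustrative data.
rng = np.random.default_rng(7)
T, N = 400, 4
r_m = rng.normal(0.006, 0.045, T)
R = 0.003 + np.outer(r_m, rng.uniform(0.6, 1.4, N)) \
    + rng.normal(0.0, 0.02, (T, N))

X = np.column_stack([np.ones(T), r_m])
B = np.linalg.solve(X.T @ X, X.T @ R)
alpha_hat = B[0]
Sigma_hat = (R - X @ B).T @ (R - X @ B) / T

A = np.column_stack([R, r_m])                # N+1 assets including the market
mu = A.mean(axis=0)
Om = np.cov(A.T, bias=True)                  # MLE covariance, divisor T
theta_q2 = mu @ np.linalg.solve(Om, mu)      # maximal squared Sharpe ratio
theta_m2 = r_m.mean()**2 / r_m.var()
gap = theta_q2 - theta_m2                    # equals alpha' Sigma^{-1} alpha
quad = alpha_hat @ np.linalg.solve(Sigma_hat, alpha_hat)
```

The identity holds exactly in sample when all moments use the same divisor T, which is why the GRS statistic measures how far the market portfolio is from the ex-post mean-variance frontier.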
All these joint tests require $N < T$, because otherwise the matrix $\hat\Sigma$ is not invertible. The CRSP database contains many thousands of individual assets, so this methodology cannot be applied to the full database directly. One alternative is to work with subsets of assets and perform joint tests of the intercepts within each subset; because estimation is equation by equation, there is no adverse consequence from ignoring the other equations. Instead, authors typically work with portfolios built using various criteria. The widely used approach in practice is to form a relatively small number (for example N = 20) of portfolios of all assets and to apply the above methodology to this smaller set of securities. The Wald statistic aggregates information across all assets and should therefore provide a stronger CAPM test than the individual t tests. However, when N is large, or even moderately large, this approach faces problems. In the extreme case N > T, the sample covariance matrix is rank deficient, as already discussed, so the Wald statistic is not defined. The common practice is then to present statistics such as

$S_1 = \frac{1}{N}\sum_{i=1}^{N}\hat\alpha_i, \qquad S_2 = \frac{1}{N}\sum_{i=1}^{N}|\hat\alpha_i|, \qquad S_3 = \frac{1}{N}\sum_{i=1}^{N}\hat\alpha_i^2,$

which represent the average pricing error and can be interpreted in terms of apparently exploitable returns. According to the CAPM, all three statistics should be approximately zero. Under the alternative hypothesis, the second and third statistics will tend away from zero; the first may not do so when some $\hat\alpha_i > 0$ and others $\hat\alpha_i < 0$, leading to $\sum_i \hat\alpha_i \approx 0$. We can calculate the mean and variance of these quantities using the normal limit distribution (as $T\to\infty$) and the results of Magnus and Neudecker (1988) on the moments of quadratic forms in normal random variables. For $S_2$ we use results on the folded normal distribution (Psarakis and Panaretos, 2001). Under some additional conditions these quantities are approximately normal when both N and T are large, once their null means are subtracted.
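The three summary statistics are straightforward to compute from a vector of estimated intercepts; the $\hat\alpha$ values below are hypothetical. Note that by construction $S_2 \ge |S_1|$ and $S_2^2 \le S_3$, and that $S_1$ can be near zero even under mispricing when positive and negative intercepts offset.

```python
import numpy as np

# Average pricing-error statistics S1, S2, S3 from a vector of estimated
# intercepts. The alpha_hat values here are hypothetical.
alpha_hat = np.array([0.004, -0.003, 0.001, -0.005, 0.002])

S1 = alpha_hat.mean()             # signed average pricing error
S2 = np.abs(alpha_hat).mean()     # average absolute pricing error
S3 = (alpha_hat**2).mean()        # average squared pricing error
```

Here $S_1 = -0.0002$ is nearly zero although none of the intercepts is, while $S_2$ and $S_3$ register the mispricing.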
The adjusted average statistics S1, S2 and S3 are standardized by their null means and standard deviations, so that they are approximately standard normal under the null hypothesis. In general, it is found that some of the investment weights of the tangency portfolio are negative. Moreover, as the number of assets increases, it is shown empirically that the percentage of assets with negative weights in the portfolio approaches 50%. These findings apparently contradict the CAPM, because in equilibrium the investment weights of the tangency portfolio must all be positive. In addition, if most investors in practice choose portfolios with positive weights, this implies that they do not choose by the mean-variance (MV) rule, because selecting an optimal portfolio according to the MV rule produces many negative investment weights. The existence of negative weights would therefore imply that, in practice, investments are not selected by the MV rule, and hence that the CAPM is not valid. However, Levy and Roll (2010) show that this can be rationalized as a result of estimation error: within a 95% confidence interval around the estimated parameters, weights can be found that satisfy the non-negativity property.
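How negative tangency weights arise is easy to see. With the hypothetical means and covariance below (two highly correlated assets with unequal means), the mean-variance tangency rule shorts the low-mean asset.

```python
import numpy as np

# Tangency (maximum Sharpe ratio) weights w proportional to Omega^{-1} mu,
# normalized to sum to one. The two assets are highly correlated with
# unequal means, which forces a negative weight. Numbers are hypothetical.
mu = np.array([0.05, 0.06])                    # expected excess returns
Om = np.array([[0.040, 0.038],
               [0.038, 0.040]])                # correlation = 0.95
raw = np.linalg.solve(Om, mu)                  # unnormalized MV solution
w = raw / raw.sum()                            # tangency weights
```

With these inputs the first weight is negative (a short position), illustrating why estimated MV-optimal portfolios routinely contain negative weights even when all expected returns are positive.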

Conclusions
From the study underlying this article, a series of practical and theoretical conclusions can be drawn. First, the market model should be estimated both on daily and on monthly data. A system of equations should also be used, leading to estimators capable of projecting future developments with the highest likelihood.
This treatment of the issues above is based on a mathematical study of the computational relations used; by applying them to data, we arrive at solutions that ensure estimation and testing by maximum likelihood, that is, with as high a level of confidence as possible. For example, the Lagrange multiplier, or score, test is based on the residual vector of the restricted estimators. The tested quantities are correlated and yield a maximum likelihood estimate that ensures a high degree of confidence.
The statistics presented in this article should be used in the study of market portfolios, but in close connection with the variances obtained from the normal limit distribution and from the moments of quadratic forms in normal random variables.
A final conclusion is that, in the analysis of portfolios in the capital market context, estimation and testing should always be carried out by maximum likelihood, which guarantees the respective results with the highest probability.