Liquidity and earnings in event studies: Does data granularity matter?
- Publication Type: Journal Article
- Citation: Pacific-Basin Finance Journal, 2019, vol. 54, pp. 118–131
- Issue Date: 2019-04-01
Closed Access
Filename | Description | Size
---|---|---
1-s2.0-S0927538X18304633-main.pdf | Published Version | 843.32 kB
This item is closed access and not available.
© 2019 Elsevier B.V. Market microstructure data availability has improved significantly over time, and it is now possible to estimate liquidity measures at the nanosecond level. However, such data are not available for all markets and time periods, and high-frequency data carry a significant cost and computational burden. Goyenko et al. (2009) and Fong et al. (2017) show that various low-frequency liquidity measures can proxy for high-frequency benchmarks, with results that are robust across countries and time. However, liquidity measures do not always behave as expected during periods of information asymmetry (Collin-Dufresne and Fos, 2015). Drawing on Ball and Brown (1968), we use an event study methodology to investigate whether low-frequency liquidity measures can proxy for high-frequency measures around earnings announcements (i.e., periods of information asymmetry). We find that the Closing-Price-Quoted-Spread is the best proxy for the percent-cost high-frequency benchmarks. Using cross-sectional, portfolio, and individual time-series correlations, the most consistent low-frequency cost-per-dollar proxies are the High-Low-Impact and Closing-Price-Quoted-Spread-Impact; however, the performance of these proxies weakens in the pre- and post-announcement periods around the earnings announcement.
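The evaluation approach described in the abstract — correlating a low-frequency liquidity proxy with a high-frequency benchmark across stocks — can be sketched in a few lines. This is a minimal illustration on synthetic data; the variable names (`hf_spread`, `cpqs`), the data-generating process, and the noise level are assumptions for demonstration, not the paper's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks = 500

# Hypothetical high-frequency benchmark: percent effective spread per stock
# (illustrative values, not drawn from the paper).
hf_spread = rng.uniform(0.001, 0.01, n_stocks)

# Hypothetical low-frequency proxy in the style of a
# Closing-Price-Quoted-Spread: the benchmark observed with measurement noise.
cpqs = hf_spread + rng.normal(0.0, 0.001, n_stocks)

# Cross-sectional Pearson correlation: higher values mean the proxy tracks
# the benchmark more closely across stocks on a given date.
corr = np.corrcoef(cpqs, hf_spread)[0, 1]
print(f"cross-sectional correlation: {corr:.3f}")
```

In the paper's setting, this correlation would be recomputed in the pre-announcement, announcement, and post-announcement windows to see whether the proxy's performance deteriorates when information asymmetry is elevated.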