{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## ERA-5 Data and Graphics Preparation for the Science-on-a-Sphere" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview:\n", "\n", "1. Programmatically request a specific date from the [RDA ERA-5 THREDDS repository](https://rda.ucar.edu/thredds/catalog/files/g/ds633.0/catalog.html)\n", "1. Re-arrange the order of the longitudinal dimension in the dataset\n", "1. Use Cartopy's `add_cyclic_point` method to avoid a blank seam at the dateline\n", "1. Create 24 hours worth of Science-on-a-Sphere-ready plots\n", "1. Create a set of labels for each plot" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "\n", "| Concepts | Importance | Notes |\n", "| --- | --- | --- |\n", "| Xarray | Necessary | |\n", "\n", "* **Time to learn**: 30 minutes\n", "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "import numpy as np\n", "import pandas as pd\n", "from pandas.tseries.offsets import MonthEnd,MonthBegin\n", "import cartopy.feature as cfeature\n", "import cartopy.crs as ccrs\n", "import cartopy.util as cutil\n", "import requests\n", "from datetime import datetime, timedelta\n", "import warnings\n", "import metpy.calc as mpcalc\n", "from metpy.units import units\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "warnings.simplefilter(\"ignore\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Programmatically request a specific date from the [RDA ERA-5 THREDDS repository](https://rda.ucar.edu/thredds/catalog/files/g/ds633.0/catalog.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next several cells set the date/time, creates the URL strings, and retrieves selected grids from RDA." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Select your date and time here\n", "year = 1978\n", "month = 1\n", "day = 26\n", "hour = 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " One-day limit:
\n", " While Xarray can create datasets and data arrays from multiple files, via its open_mfdataset method, this method does not reliably work when reading from a THREDDS server such as RDA's. Therefore, please restrict your temporal range to no more than one calendar day in this notebook.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "validTime = datetime(year, month, day, hour)\n", "YYYYMM = validTime.strftime(\"%Y%m\")\n", "YYYYMMDD = validTime.strftime(\"%Y%m%d\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Among the many tools for time series analysis that Pandas provides are functions to determine the first and last days of a month." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "monthEnd = pd.to_datetime(YYYYMM, format='%Y%m') + MonthEnd(0)\n", "mEndStr = monthEnd.strftime('%Y%m%d23')\n", "\n", "monthBeg = pd.to_datetime(YYYYMM, format='%Y%m') + MonthBegin(0)\n", "mBegStr = monthBeg.strftime('%Y%m%d00')\n", "\n", "dayBegStr = YYYYMMDD + '00'\n", "dayEndStr = YYYYMMDD + '23'\n", "\n", "dayBeg = datetime.strptime(dayBegStr, '%Y%m%d%H')\n", "dayEnd = datetime.strptime(dayEndStr, '%Y%m%d%H')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dayBeg, dayEnd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "monthBeg, monthEnd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create the URLs for the various grids." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SfcURL = []\n", "SfcURL.append('https://rda.ucar.edu/thredds/dodsC/files/g/ds633.0/e5.oper.an.')\n", "SfcURL.append(YYYYMM + '/e5.oper.an')\n", "SfcURL.append('.ll025')\n", "SfcURL.append('.' + mBegStr + '_' + mEndStr + '.nc')\n", "print(SfcURL)\n", "\n", "PrsURL = []\n", "PrsURL.append('https://rda.ucar.edu/thredds/dodsC/files/g/ds633.0/e5.oper.an.')\n", "PrsURL.append(YYYYMM + '/e5.oper.an')\n", "PrsURL.append('.ll025')\n", "PrsURL.append('.' 
+ dayBegStr + '_' + dayEndStr + '.nc' )\n", "print(PrsURL)\n", "\n", "ds_urlZ = PrsURL[0] + 'pl/' + PrsURL[1] + '.pl.128_129_z' + PrsURL[2] + 'sc' + PrsURL[3]\n", "ds_urlQ = PrsURL[0] + 'pl/' + PrsURL[1] + '.pl.128_133_q' + PrsURL[2] + 'sc' + PrsURL[3]\n", "ds_urlT = PrsURL[0] + 'pl/' + PrsURL[1] + '.pl.128_130_t' + PrsURL[2] + 'sc' + PrsURL[3]\n", "# Notice that U and V use \"uv\" instead of \"sc\" in their name!\n", "ds_urlU = PrsURL[0] + 'pl/' + PrsURL[1] + '.pl.128_131_u' + PrsURL[2] + 'uv' + PrsURL[3]\n", "ds_urlV = PrsURL[0] + 'pl/' + PrsURL[1] + '.pl.128_132_v' + PrsURL[2] + 'uv' + PrsURL[3]\n", "\n", "ds_urlU10 = SfcURL[0] + 'sfc/' + SfcURL[1] + '.sfc.128_165_10u' + SfcURL[2] + 'sc' + SfcURL[3]\n", "ds_urlV10 = SfcURL[0] + 'sfc/' + SfcURL[1] + '.sfc.128_166_10v' + SfcURL[2] + 'sc' + SfcURL[3]\n", "ds_urlSLP = SfcURL[0] + 'sfc/' + SfcURL[1] + '.sfc.128_151_msl' + SfcURL[2] + 'sc' + SfcURL[3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
NOTE: You can find tables listing all of the various parameters in the ERA-5 datasets at this link
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use your RDA credentials to access these individual NetCDF files. You should have previously registered with RDA, and then saved your user id and password in your home directory as .rdarc , with permissions such that only you have read/write access." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Retrieve login credential for RDA.\n", "from pathlib import Path\n", "HOME = str(Path.home())\n", "credFile = open(HOME+'/.rdarc','r')\n", "userId, pw = credFile.read().split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Connect to the RDA THREDDS server and point to the files of interest. Here, we'll comment out the ones we are not interested in but can always uncomment when we want them. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = requests.Session()\n", "session.auth = (userId, pw)\n", "#storeU10 = xr.backends.PydapDataStore.open(ds_urlU10, session=session)\n", "#storeV10 = xr.backends.PydapDataStore.open(ds_urlV10, session=session)\n", "storeSLP = xr.backends.PydapDataStore.open(ds_urlSLP, session=session)\n", "#storeQ = xr.backends.PydapDataStore.open(ds_urlQ, session=session)\n", "#storeT = xr.backends.PydapDataStore.open(ds_urlT, session=session)\n", "#storeU = xr.backends.PydapDataStore.open(ds_urlU, session=session)\n", "#storeV = xr.backends.PydapDataStore.open(ds_urlV, session=session)\n", "storeZ = xr.backends.PydapDataStore.open(ds_urlZ, session=session)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds_urlSLP" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#dsU10 = xr.open_dataset(storeU10)\n", "#dsV10 = xr.open_dataset(storeV10)\n", "dsSLP = xr.open_dataset(storeSLP)\n", "#dsQ = xr.open_dataset(storeQ)\n", "#dsT = xr.open_dataset(storeT)\n", "#dsU = 
xr.open_dataset(storeU)\n", "#dsV = xr.open_dataset(storeV)\n", "dsZ = xr.open_dataset(storeZ)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Examine the datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The surface fields span the entire month. Let's restrict the range so it matches the day of interest." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dsSLP = dsSLP.sel(time=slice(dayBeg,dayEnd))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create an Xarray DataArray object for SLP and perform unit conversion" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SLP = dsSLP['MSL']\n", "SLP = SLP.metpy.convert_units('hPa')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Examine the SLP DataArray" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SLP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now examine the geopotential data on isobaric surfaces." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dsZ" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To reduce the memory footprint for data on isobaric surfaces, let's request only 6 out of the 37 available levels." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dsZ = dsZ.sel(level = [200,300,500,700,850,1000])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dsZ" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create an Xarray DataArray object for Z" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Z = dsZ['Z']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Z" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Set contour levels for both SLP and Z. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slpLevels = np.arange(864,1080,4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Choose a particular isobaric surface." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pLevel = 500" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Redefine Z so it consists only of the data at the level you just chose." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Z = Z.sel(level=pLevel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Science-on-a-Sphere requires that projected images, besides using Plate Carree projection, extend from -180 to +180 degrees longitude. The ERA-5 datasets' longitudinal coordinates start at 0 and go to 360. Define a function that re-sorts datasets (or data arrays) so the longitudinal range begins at the dateline instead of the Greenwich meridian." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def reSortCoords(d):\n", " longitudeName = 'longitude'\n", " d.coords[longitudeName] = (d.coords[longitudeName] + 180) % 360 - 180\n", " d = d.sortby(d[longitudeName])\n", " return d" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Z = reSortCoords(Z)\n", "SLP = reSortCoords(SLP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Examine one of the re-sorted data arrays" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SLP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the purposes of plotting geopotential heights in decameters, choose an appropriate contour interval and range of values ... for geopotential heights, a common convention is: from surface up through 700 hPa: 3 dam; above that, 6 dam to 400 and then 9 or 12 dam from 400 and above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if (pLevel == 1000):\n", " zLevels= np.arange(-90,63, 3)\n", "elif (pLevel == 850):\n", " zLevels = np.arange(60, 183, 3)\n", "elif (pLevel == 700):\n", " zLevels = np.arange(201, 339, 3)\n", "elif (pLevel == 500):\n", " zLevels = np.arange(468, 606, 6)\n", "elif (pLevel == 300):\n", " zLevels = np.arange(765, 1008, 9)\n", "elif (pLevel == 200): \n", " zLevels = np.arange(999, 1305, 9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
In this notebook, we did not retrieve the U- and V- wind component grids. However, if you end up modifying this notebook to do so, uncomment the lines in the following cell.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#print (U.units)\n", "## Convert wind from m/s to knots; use MetPy to calculate the wind speed.\n", "#UKts = U.metpy.convert_units('knots')\n", "#VKts = V.metpy.convert_units('knots')\n", "## Calculate windspeed.\n", "#wspdKts = calc.wind_speed(U,V).to('knots')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create objects for the relevant coordinate arrays; in this case, *longitude*, *latitude*, and *time*." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lons, lats, times= SLP.longitude, SLP.latitude, SLP.time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Take a peek at a couple of these coordinate arrays." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lons" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Notice that the longitudinal array extends to 179.75, not 180. This will prove to be a problem!
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "times" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create contour plots of SLP and geopotential height." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's create a single plot, for the first hour of the day. After we have perfected the plot, we will use a `for` loop to produce a graphic for all 24 hours of our chosen day." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "proj_data = ccrs.PlateCarree() # The dataset's x- and y- coordinates are lon-lat" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = '110m'\n", "fig = plt.figure(figsize=(18,12))\n", "ax = plt.subplot(1,1,1,projection=ccrs.PlateCarree())\n", "ax.set_global()\n", "ax.add_feature(cfeature.COASTLINE.with_scale(res))\n", "ax.add_feature(cfeature.STATES.with_scale(res))\n", "\n", "# Add the title\n", "valTime = pd.to_datetime(SLP.time[0].values).strftime(\"%m/%d/%Y at %H UTC\")\n", "tl1 = str(\"ERA-5 SLP and \" + str(pLevel) + \"hPa Z, \" + valTime)\n", "plt.title(tl1,fontsize=16)\n", "\n", "# Height (Z) contour fills\n", "\n", "# Convert from units of geopotential to height in decameters.\n", "Zdam = mpcalc.geopotential_to_height(Z.isel(time=0)).metpy.convert_units('dam')\n", "CF = ax.contourf(lons,lats,Zdam,levels=zLevels,cmap=plt.get_cmap('viridis'), extend='both')\n", "cbar = fig.colorbar(CF,shrink=0.5)\n", "cbar.set_label(r'Heights (dam)', size='medium')\n", " \n", "# SLP contour lines\n", "CL = ax.contour(lons,lats,SLP.isel(time=0),slpLevels,linewidths=1.25,colors='chocolate')\n", "ax.clabel(CL, inline_spacing=0.2, fontsize=8, fmt='%.0f'); " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At first glance, this looks good. But let's re-center the map at the international dateline instead of the Greenwich meridian." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = '110m'\n", "fig = plt.figure(figsize=(18,12))\n", "ax = plt.subplot(1,1,1,projection=ccrs.PlateCarree(central_longitude=180))\n", "ax.set_global()\n", "ax.add_feature(cfeature.COASTLINE.with_scale(res))\n", "ax.add_feature(cfeature.STATES.with_scale(res))\n", "\n", "# Add the title\n", "valTime = pd.to_datetime(SLP.time[0].values).strftime(\"%m/%d/%Y at %H UTC\")\n", "tl1 = str(\"ERA-5 SLP and \" + str(pLevel) + \"hPa Z, \" + valTime)\n", "plt.title(tl1,fontsize=16)\n", "\n", "# Height (Z) contour fills\n", "\n", "# Convert from units of geopotential to height in decameters.\n", "Zdam = mpcalc.geopotential_to_height(Z.isel(time=0)).metpy.convert_units('dam')\n", "CF = ax.contourf(lons,lats,Zdam,levels=zLevels,cmap=plt.get_cmap('viridis'), extend='both')\n", "cbar = fig.colorbar(CF,shrink=0.5)\n", "cbar.set_label(r'Heights (dam)', size='medium')\n", " \n", "# SLP contour lines\n", "CL = ax.contour(lons,lats,SLP.isel(time=0),slpLevels,linewidths=1.25,colors='chocolate')\n", "ax.clabel(CL, inline_spacing=0.2, fontsize=8, fmt='%.0f'); " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That still looks good, right? Let's now try plotting on an orthographic projection, somewhat akin to how it would look on the sphere." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = '110m'\n", "fig = plt.figure(figsize=(18,12))\n", "ax = plt.subplot(1,1,1,projection=ccrs.Orthographic(central_longitude=180))\n", "ax.set_global()\n", "ax.add_feature(cfeature.COASTLINE.with_scale(res))\n", "ax.add_feature(cfeature.STATES.with_scale(res))\n", "\n", "# Add the title\n", "valTime = pd.to_datetime(SLP.time[0].values).strftime(\"%m/%d/%Y at %H UTC\")\n", "tl1 = str(\"ERA-5 SLP and \" + str(pLevel) + \"hPa Z, \" + valTime)\n", "plt.title(tl1,fontsize=16)\n", "\n", "# Height (Z) contour fills\n", "\n", "# Convert from units of geopotential to height in decameters.\n", "Zdam = mpcalc.geopotential_to_height(Z.isel(time=0)).metpy.convert_units('dam')\n", "CF = ax.contourf(lons,lats,Zdam,levels=zLevels,cmap=plt.get_cmap('viridis'), extend='both', transform=proj_data) # We need the transform argument now\n", "cbar = fig.colorbar(CF,shrink=0.5)\n", "cbar.set_label(r'Heights (dam)', size='medium')\n", " \n", "# SLP contour lines\n", "CL = ax.contour(lons,lats,SLP.isel(time=0),slpLevels,linewidths=1.25,colors='chocolate', transform=proj_data)\n", "ax.clabel(CL, inline_spacing=0.2, fontsize=8, fmt='%.0f'); " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There \"seams\" to be a problem close to the dateline. This is because the longitude ends at 179.75 E ... leaving a gap between 179.75 and 180 degrees. Fortunately, Cartopy includes the `add_cyclic_point` method in its `utilities` class to deal with this rather common phenomenon in global gridded datasets." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = '110m'\n", "fig = plt.figure(figsize=(18,12))\n", "ax = plt.subplot(1,1,1,projection=ccrs.Orthographic(central_longitude=180))\n", "ax.set_global()\n", "ax.add_feature(cfeature.COASTLINE.with_scale(res))\n", "ax.add_feature(cfeature.STATES.with_scale(res))\n", "\n", "# Add the title\n", "valTime = pd.to_datetime(SLP.time[0].values).strftime(\"%m/%d/%Y at %H UTC\")\n", "tl1 = str(\"ERA-5 SLP and \" + str(pLevel) + \"hPa Z, \" + valTime)\n", "plt.title(tl1,fontsize=16)\n", "\n", "# Convert from units of geopotential to height in decameters.\n", "Zdam = mpcalc.geopotential_to_height(Z.isel(time=0)).metpy.convert_units('dam')\n", "\n", "# add cyclic points to data and longitudes\n", "# latitudes are unchanged in 1-dimension\n", "SLP2d, clons= cutil.add_cyclic_point(SLP.isel(time=0), lons)\n", "z2d, clons = cutil.add_cyclic_point(Zdam, lons)\n", "\n", "# Height (Z) contour fills\n", "\n", "\n", "CF = ax.contourf(clons,lats,z2d,levels=zLevels,cmap=plt.get_cmap('viridis'), extend='both', transform=proj_data) # We need the transform argument now\n", "cbar = fig.colorbar(CF,shrink=0.5)\n", "cbar.set_label(r'Heights (dam)', size='medium')\n", " \n", "# SLP contour lines\n", "CL = ax.contour(clons,lats,SLP2d,slpLevels,linewidths=1.25,colors='chocolate', transform=proj_data)\n", "ax.clabel(CL, inline_spacing=0.2, fontsize=8, fmt='%.0f'); " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The white seam is gone ... now let's go back to the PlateCarree projection, centered once again at its default value of 0 degrees." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Note: The Science on a Sphere treats titles and colorbars as separate layers. Thus, in this next cell we will neither generate nor include them in the figure.
\n", "Additionally, the sphere expects its graphics to have a resolution of 2048x1024.
\n", "Finally, by default, Matplotlib includes a border frame around each Axes. We don't want that included on the sphere-projected graphic either.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = '110m'\n", "dpi = 100\n", "fig = plt.figure(figsize=(2048/dpi, 1024/dpi))\n", "ax = plt.subplot(1,1,1,projection=ccrs.PlateCarree(), frameon=False)\n", "ax.set_global()\n", "ax.add_feature(cfeature.COASTLINE.with_scale(res))\n", "ax.add_feature(cfeature.BORDERS.with_scale(res))\n", "ax.add_feature(cfeature.STATES.with_scale(res))\n", "\n", "# Convert from units of geopotential to height in decameters.\n", "Zdam = mpcalc.geopotential_to_height(Z.isel(time=0)).metpy.convert_units('dam')\n", "\n", "# add cyclic points to data and longitudes\n", "# latitudes are unchanged in 1-dimension\n", "SLP2d, clons= cutil.add_cyclic_point(SLP.isel(time=0), lons)\n", "z2d, clons = cutil.add_cyclic_point(Zdam, lons)\n", "\n", "# Height (Z) contour fills\n", "\n", "# Note we don't need the transform argument since the map/data projections are the same, but we'll leave it in\n", "CF = ax.contourf(clons,lats,z2d,levels=zLevels,cmap=plt.get_cmap('viridis'), extend='both', transform=proj_data) \n", "\n", "# SLP contour lines\n", "CL = ax.contour(clons,lats,SLP2d,slpLevels,linewidths=1.25,colors='chocolate', transform=proj_data)\n", "ax.clabel(CL, inline_spacing=0.2, fontsize=8, fmt='%.0f')\n", "fig.tight_layout(pad=.01)\n", "fig.savefig(\"test1.png\") " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define a function to create the plot for each hour. \n", "The function accepts the hour and a DataArray as its arguments." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def make_sos_map (i, z):\n", " res = '110m'\n", " dpi = 100\n", " fig = plt.figure(figsize=(2048/dpi, 1024/dpi))\n", " ax = plt.subplot(1,1,1,projection=ccrs.PlateCarree(), frameon=False)\n", " ax.set_global()\n", " ax.add_feature(cfeature.COASTLINE.with_scale(res))\n", " ax.add_feature(cfeature.BORDERS.with_scale(res))\n", " ax.add_feature(cfeature.STATES.with_scale(res))\n", "\n", " # add cyclic points to data and longitudes\n", " # latitudes are unchanged in 1-dimension\n", " SLP2d, clons= cutil.add_cyclic_point(SLP.isel(time=i), lons)\n", " z2d, clons = cutil.add_cyclic_point(z, lons)\n", "\n", " # Height (Z) contour fills\n", "\n", " CF = ax.contourf(clons,lats,z2d,levels=zLevels,cmap=plt.get_cmap('viridis'), extend='both') \n", "\n", " # SLP contour lines\n", " CL = ax.contour(clons,lats,SLP2d,slpLevels,linewidths=1.25,colors='chocolate', transform=proj_data)\n", " ax.clabel(CL, inline_spacing=0.2, fontsize=8, fmt='%.0f')\n", " fig.tight_layout(pad=.01)\n", " frameNum = str(i).zfill(2)\n", " fig.savefig('ERA5_780126' + frameNum + '.png')\n", " plt.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Open a handle to a text file that will contain each plot's title." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
When making your own products, modify the title string appropriately, and also modify/eliminate the line that performs unit conversions depending on what scalar/vector you wish to plot.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fileName = '500Z_SLP.txt' # Modify the filename to fit the context of your plot." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Execute the `for` loop\n", "Loop over the 24 hours and create each hour's graphic and write the title line to the text file. We will also convert the geopotential grid into height in decameters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
For demonstration purposes, we will use a time increment of 12 hours ... thus only 2 graphics will be produced. When you are ready, change `inc` to 1 so all hours are processed.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fileObject = open(fileName, 'w')\n", "startHr = 0\n", "endHr = 24\n", "inc = 12 #\n", "\n", "for time in range(startHr, endHr, inc):\n", " \n", "# Perform any unit conversions appropriate for your scalar quantities. In this example,\n", "# we convert geopotential to geopotential height in decameters.\n", "\n", " z = mpcalc.geopotential_to_height(Z.isel(time=time)).metpy.convert_units('dam')\n", " make_sos_map(time,z)\n", " # Construct the title string and write it to the file\n", " valTime = pd.to_datetime(times[time].values).strftime(\"%m/%d/%Y %H UTC\")\n", " tl1 = str(\"SLP & \" + str(pLevel) + \" hPa Z \" + valTime + \"\\n\") # \\n is the newline character\n", " fileObject.write(tl1)\n", " print(tl1)\n", "\n", "# Close the text file\n", "fileObject.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## Summary\n", "\n", "We now have a means of creating graphics and corresponding title strings that we can upload and ultimately visualize on our Science-on-a-Sphere.\n", "\n", "### What's Next?\n", "\n", "We will create a SoS-friendly color bar for any products that use a contour fill. After that, we will have everything we need for our SoS visualization, and just need to learn how to make it ready for upload to the SoS computer.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 August 2022 Environment", "language": "python", "name": "aug22" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 4 }