- Mar 29, 2023
Wilbur, Spencer Franklin authored
Wilbur, Spencer Franklin authored
Updated the pyproject.toml, Metadata.py, and Code.json files to reflect the most current versions and code found in the Master branch. This was done to resolve a merge conflict.
- Mar 17, 2023
Wilbur, Spencer Franklin authored
- Mar 08, 2023
Wilbur, Spencer Franklin authored
A vulnerability in the fastapi dependency starlette required a bump to [fastapi v0.92.0](https://github.com/tiangolo/fastapi/releases/tag/0.92.0) and an update to the poetry.lock file.
Wilbur, Spencer Franklin authored
Wilbur, Spencer Franklin authored
Updated the script to create a new line following the last line of any information pertaining to the body summary. This should prevent other files from being concatenated to the end of a previous line.
Added lines so that the eleventh and following gaps are shown in a collapsible list. Added line 186 to create a break and new line after all gaps (or no gaps) are printed.
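A minimal sketch of the formatting described above. The function and variable names are hypothetical and the real script differs; this only illustrates the collapsible-list-plus-trailing-newline idea.

```python
def format_gaps(gap_lines, visible=10):
    """Show the first `visible` gaps normally and collapse the eleventh
    and following gaps into a <details> block, which GitLab-flavored
    markdown renders as a collapsible list."""
    lines = list(gap_lines[:visible])
    hidden = gap_lines[visible:]
    if hidden:
        lines.append("<details><summary>%d more gaps</summary>" % len(hidden))
        lines.extend(hidden)
        lines.append("</details>")
    # trailing newline so the next file's output starts on its own line
    return "\n".join(lines) + "\n"
```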
I ran `poetry update`, which fixed several vulnerabilities. HOWEVER, there were a couple that didn't get fixed, including the wheel package. I ran `poetry run pip install -U wheel` to update wheel in my own virtual environment instead of adding it to pyproject.toml. This is because I read something that indicated that wheel is a rather special package that either should be, or actually cannot be, handled via pyproject.toml. I *believe* that the GitLab pipeline will create an entirely new virtual environment, and that this will not be an issue, but we'll see.
- code.json
- pyproject.toml
I was lazy the first time and didn't subtract `delta` from `starttime` when setting the default `min_count_end` in the `configure` method. This led to "flakiness" in Dst when long gaps were encountered in input observatories...exactly the kind of issue `min_count_end` was intended to mitigate. It should work correctly now.
The `has_any_channels` method in TimeseriesUtility.py did not properly handle streams containing multiple traces with the same channel but different stations (or networks, or locations). It should work now.
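A condensed sketch of the corrected behavior, using stand-in trace objects rather than real obspy Traces (the real method's signature and internals differ):

```python
from types import SimpleNamespace

def has_any_channels(stream, channels):
    """Return True if any requested channel appears in the stream.
    Collecting channel codes across *all* traces handles streams where
    several traces share a channel code but differ in station, network,
    or location."""
    present = {trace.stats.channel for trace in stream}
    return any(channel in present for channel in channels)

# two traces with the same channel "H" from different stations
stream = [
    SimpleNamespace(stats=SimpleNamespace(station="BOU", channel="H")),
    SimpleNamespace(stats=SimpleNamespace(station="FRD", channel="H")),
]
```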
Writing unit tests for Controller configurations is tricky, so I didn't do it, and this tiny-but-impactful bug slipped in. I did test this on the staging server this time, and the processing pipeline should work after this is deployed.
The original version of AverageAlgorithm.py seemed to be written to process the Controller's `--outchannels`. It "worked" for several years because `--outchannels` defaulted to `--inchannels`. But what we actually always wanted was to process `--inchannels`. Mostly this was just a matter of semantics, but when I recently added the ability to average fewer than the full complement of inputs, it was also necessary to change the `can_produce_data()` method to check for "any" instead of "all" channels, which in turn required that the AverageAlgorithm class properly instantiate its `_inchannels` and `_outchannels` class variables instead of just setting them to `None`. To briefly explain the different changes in this commit:

- added `Algorithm.__init__(self, inchannels=[channel])` to ensure that `can_produce_data()` would work when run via the programmatic interface
- added `Algorithm.configure(self, arguments)` to `AverageAlgorithm.configure()` to ensure that `can_produce_data()` would work when run via Controller.py
- changed all variations of `outchannel` to `inchannel`
- cleaned up where class variables were modified inside the `process()` method
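The init/configure wiring described above might look roughly like this. This is a condensed, hypothetical sketch with dict-based stand-in traces; the real classes carry far more state.

```python
class Algorithm:
    def __init__(self, inchannels=None, outchannels=None):
        self._inchannels = inchannels
        self._outchannels = outchannels

    def can_produce_data(self, stream):
        # simplified: require *any* (not all) input channel to be present
        present = {trace["channel"] for trace in stream}
        return any(channel in present for channel in self._inchannels)

    def configure(self, arguments):
        self._inchannels = arguments.inchannels
        self._outchannels = arguments.outchannels


class AverageAlgorithm(Algorithm):
    def __init__(self, channel=None):
        # instantiate _inchannels so can_produce_data() works when the
        # algorithm is constructed via the programmatic interface
        Algorithm.__init__(self, inchannels=[channel], outchannels=[channel])

    def configure(self, arguments):
        # let the parent populate channel lists when run via Controller.py
        Algorithm.configure(self, arguments)
```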
The method `TimeseriesUtility.get_stream_gaps()` would overwrite `gaps[channel]` if there were multiple Traces in a stream with the same channel but different observatories. This is a typical situation when using the AverageAlgorithm.
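A sketch of the fix, with stand-in traces that carry their gaps directly (the real method derives gaps from trace data and times):

```python
def get_trace_gaps(trace):
    # stand-in: real code computes gaps from the trace's data
    return trace["gaps"]

def get_stream_gaps(stream):
    """Accumulate gaps per channel with setdefault(...).extend(...), so a
    second trace sharing a channel code (e.g. 'H' from another
    observatory) adds to the list instead of overwriting it."""
    gaps = {}
    for trace in stream:
        gaps.setdefault(trace["channel"], []).extend(get_trace_gaps(trace))
    return gaps

# same channel "H" reported by two observatories, each with one gap
stream = [
    {"channel": "H", "gaps": [("2023-03-01T00:00", "2023-03-01T01:00")]},
    {"channel": "H", "gaps": [("2023-03-02T00:00", "2023-03-02T01:00")]},
]
```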
Recent attempts to change the AverageAlgorithm to allow fewer than the full complement of inputs when calculating a multi-station average were thwarted by the default `can_produce_data()` method in the parent Algorithm class, which required all inputs to be present, when what we now want is **any** input to be present.
Imported `Enum` to define a CLI parameter with a predefined set of values to choose from. Created a `spreadsheets_dir` option and a `factory` option.
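One way to express an Enum-constrained CLI option, shown here with stdlib argparse for portability. The enum members and option names below are illustrative assumptions, not the actual ones from the commit.

```python
import argparse
from enum import Enum

class Factory(Enum):
    """Hypothetical set of allowed --factory values."""
    IAGA = "iaga"
    EDGE = "edge"

    def __str__(self):
        return self.value  # cleaner display in --help and error messages

parser = argparse.ArgumentParser()
parser.add_argument("--spreadsheets-dir", default=".")
parser.add_argument(
    "--factory",
    type=Factory,            # argparse converts "edge" -> Factory.EDGE
    choices=list(Factory),   # rejects anything outside the predefined set
    default=Factory.IAGA,
)
args = parser.parse_args(["--factory", "edge", "--spreadsheets-dir", "/tmp"])
```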
Previously used the number of traces in the input Stream. This could be fewer than the number of desired observatories if a given input's data was completely missing.
The recently added `min_count` option to AverageAlgorithm.py leads to some undesirable behavior when realtime data, with asynchronous inputs, are being processed. By adding the ability to specify an interval over which `min_count` is applied, some of this undesirable behavior can be mitigated. In particular, if the `realtime` option is specified via the controller, and `min_count` is defined, the minimum number of inputs will be allowed only for time steps prior to `(UTCDateTime.now() - realtime)`; the full complement of inputs will be required to calculate averages more recent than that. One drawback is that if an input observatory goes offline for an extended period, the Dst index will be calculated with a persistent lag `realtime` seconds long. A user can always override this admittedly ad-hoc default behavior using the `min_count_start` and `min_count_end` options.
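The default-window logic described above, sketched with stdlib datetimes. The real code uses obspy `UTCDateTime` and different names; this is only the shape of the calculation.

```python
from datetime import datetime, timedelta, timezone

def default_min_count_end(realtime_seconds, now=None):
    """With --realtime set and min_count defined, relax min_count only for
    time steps older than (now - realtime); steps newer than the returned
    boundary still require the full complement of inputs."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(seconds=realtime_seconds)
```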
A "feature" in obspy is that the `stats.npts` metadata object is not automatically calculated if a Trace is created by passing in a `Stats` object (instead of a simple dictionary). The Trace still contained the data array, which is probably why this issue wasn't discovered sooner, but something as simple as `trace.times()` is broken if `npts` is not specified. It's amazing this wasn't discovered earlier, given how many times traces get "created" throughout geomag-algorithms. We should probably survey the codebase to make sure this isn't a problem elsewhere.
- added a `min_count` keyword to the AverageAlgorithm's `__init__()` method
- added a `--average-min-count` option via the `add_arguments()` classmethod
- set `self.min_count = arguments.average_min_count` in the `configure()` method
- refactored the `process()` method to respect `min_count`; it now defaults to requiring that all inputs be valid (this modified a recent merge by @awernle that only required that any inputs be valid)
- modified AverageAlgorithm_test.py to properly assess things in light of the change to default behavior just mentioned
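A numpy sketch of how averaging might respect `min_count`, defaulting to requiring all inputs. The function shape is illustrative, not the actual `process()` implementation.

```python
import numpy as np

def masked_average(inputs, min_count=None):
    """Average rows (observatory x time step), ignoring NaNs.  A time
    step gets a value only when at least `min_count` inputs are valid;
    min_count=None keeps the default of requiring *all* inputs."""
    data = np.ma.masked_invalid(np.asarray(inputs, dtype=float))
    n_inputs = data.shape[0]
    required = n_inputs if min_count is None else min_count
    # count valid (unmasked) inputs at each time step
    valid = np.sum(~np.ma.getmaskarray(data), axis=0)
    mean = np.ma.masked_where(valid < required, data.mean(axis=0))
    return mean.filled(np.nan)
```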