historical_inputs modules

This module provides tools for accessing historical market data and preprocessing it for compatibility with the SpotMarket class.

xml_cache

Classes:

XMLCacheManager(cache_folder)

Class for accessing data stored in AEMO's NEMDE output files.

Exceptions:

MissingDataError

Raised when data cannot be downloaded from NEMWeb.

class nempy.historical_inputs.xml_cache.XMLCacheManager(cache_folder)

Class for accessing data stored in AEMO’s NEMDE output files.

Examples

An XMLCacheManager instance is created by providing the path to the directory containing the cache of XML files.

>>> manager = XMLCacheManager('test_nemde_cache')
Parameters:

cache_folder (str) – the path to the directory containing the cache of XML files.

Methods:

populate(start_year, start_month, end_year, ...)

Download data to the cache from the AEMO website.

populate_by_day(start_year, start_month, ...)

Download data to the cache from the AEMO website.

load_interval(interval)

Load the data for a particular 5 min dispatch interval into memory.

interval_inputs_in_cache()

Check if the cache contains the data for the loaded interval, primarily for debugging.

get_file_path()

Get the file path to the currently loaded interval.

get_file_name()

Get the filename of the currently loaded interval.

get_unit_initial_conditions()

Get the initial conditions of units at the start of the dispatch interval.

get_unit_fast_start_parameters()

Get the unit fast start dispatch inflexibility parameter values.

get_unit_volume_bids()

Get the unit volume bids.

get_unit_price_bids()

Get the unit price bids.

get_UIGF_values()

Get the unit unconstrained intermittent generation forecast.

get_violations()

Get the total volume violation of different constraint sets.

get_constraint_violation_prices()

Get the price of violating different constraint sets.

is_intervention_period()

Check if the interval currently loaded was subject to an intervention.

get_constraint_rhs()

Get generic constraints rhs values.

get_constraint_type()

Get generic constraints type.

get_constraint_region_lhs()

Get generic constraints lhs term regional coefficients.

get_constraint_unit_lhs()

Get generic constraints lhs term unit coefficients.

get_constraint_interconnector_lhs()

Get generic constraints lhs term interconnector coefficients.

get_market_interconnector_link_bid_availability()

Get the bid availability of market interconnectors.

find_intervals_with_violations(limit, ...)

Find the set of dispatch intervals where the non-intervention dispatch runs had constraint violations.

get_service_prices()

Get the energy market and FCAS prices by region.

populate(start_year, start_month, end_year, end_month, verbose=True)

Download data to the cache from the AEMO website. Data downloaded is inclusive of the start and end month.
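
For example, to cache all of January 2019 (a sketch only; this requires network access to AEMO's NEMweb portal and the download may be large):

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.populate(start_year=2019, start_month=1, end_year=2019, end_month=1)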

populate_by_day(start_year, start_month, end_year, end_month, start_day, end_day, verbose=True)

Download data to the cache from the AEMO website. Data downloaded is inclusive of the start and end date.
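
Similarly, to cache just the first day of January 2019 (again a sketch requiring network access):

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.populate_by_day(start_year=2019, start_month=1, end_year=2019,
...                         end_month=1, start_day=1, end_day=1)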

load_interval(interval)

Load the data for a particular 5 min dispatch interval into memory.

If the interval's data is not on disk, an attempt is made to download it from AEMO's NEMweb portal.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
Parameters:

interval (str) – In the format ‘%Y/%m/%d %H:%M:%S’

Raises:

MissingDataError – If the data for an interval is not in the cache and cannot be downloaded from NEMWeb.
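
Interval strings in the required format can be built with the standard library; a usage note rather than part of the nempy API:

>>> from datetime import datetime
>>> datetime(2019, 1, 1).strftime('%Y/%m/%d %H:%M:%S')
'2019/01/01 00:00:00'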

interval_inputs_in_cache()

Check if the cache contains the data for the loaded interval, primarily for debugging.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.interval_inputs_in_cache()
True
Return type:

bool

get_file_path()

Get the file path to the currently loaded interval.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_file_path() 
PosixPath('test_nemde_cache/NEMSPDOutputs_2018123124000.loaded')

So that the doctest runs on all operating systems, let's also look at the parts of the path.

>>> manager.get_file_path().parts
('test_nemde_cache', 'NEMSPDOutputs_2018123124000.loaded')
get_file_name()

Get the filename of the currently loaded interval.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_file_name()
'NEMSPDOutputs_2018123124000.loaded'
get_unit_initial_conditions()

Get the initial conditions of units at the start of the dispatch interval.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_initial_conditions()
         DUID   INITIALMW  RAMPUPRATE  RAMPDOWNRATE  AGCSTATUS
0      AGLHAL    0.000000         NaN           NaN        0.0
1      AGLSOM    0.000000         NaN           NaN        0.0
2     ANGAST1    0.000000         NaN           NaN        0.0
3       APD01    0.000000         NaN           NaN        0.0
4       ARWF1   54.500000         NaN           NaN        0.0
..        ...         ...         ...           ...        ...
283  YARWUN_1  140.360001         NaN           NaN        0.0
284     YWPS1  366.665833  177.750006    177.750006        1.0
285     YWPS2  374.686066  190.125003    190.125003        1.0
286     YWPS3    0.000000  300.374994    300.374994        0.0
287     YWPS4  368.139252  182.249994    182.249994        1.0

[288 rows x 5 columns]
Returns:

Columns:       Description:

DUID           unique identifier of a dispatch unit, (as str)
INITIALMW      the output or consumption of the unit at the start
               of the interval, in MW, (as np.float64)
RAMPUPRATE     ramp up rate of the unit as reported by the SCADA
               system at the start of the interval, in MW/h,
               (as np.float64)
RAMPDOWNRATE   ramp down rate of the unit as reported by the SCADA
               system at the start of the interval, in MW/h,
               (as np.float64)
AGCSTATUS      flag to indicate whether the unit is connected to
               the AGC system at the start of the interval, 0.0 if
               not and 1.0 if it is, (as np.float64)

Return type:

pd.DataFrame

get_unit_fast_start_parameters()

Get the unit fast start dispatch inflexibility parameter values.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_fast_start_parameters()
        DUID  MinLoadingMW  CurrentMode  CurrentModeTime  T1  T2  T3  T4
0     AGLHAL             2            0                0  10   3  10   2
1     AGLSOM            16            0                0  20   2  35   2
2   BARCALDN            12            0                0  14   4   1   4
3   BARRON-1             5            4                1  11   3   1   1
4   BARRON-2             5            4                1  11   3   1   1
..       ...           ...          ...              ...  ..  ..  ..  ..
69     VPGS5            48            0                0   5   3  15   0
70     VPGS6            48            0                0   5   3  15   0
71   W/HOE#1           160            0                0   3   0   0   0
72   W/HOE#2           160            0                0   3   0   0   0
73    YABULU            83            0                0   5   6  42   6

[74 rows x 8 columns]
Returns:

Columns:          Description:

DUID              unique identifier of a dispatch unit, (as str)
MinLoadingMW      see AEMO doc, in MW, (as np.int64)
CurrentMode       the dispatch mode of the unit at the start of
                  the interval, for mode definitions see AEMO doc,
                  (as np.int64)
CurrentModeTime   the time already spent in the current mode,
                  in minutes, (as np.int64)
T1                the total length of mode 1, in minutes, (as np.int64)
T2                the total length of mode 2, in minutes, (as np.int64)
T3                the total length of mode 3, in minutes, (as np.int64)
T4                the total length of mode 4, in minutes, (as np.int64)

Return type:

pd.DataFrame

get_unit_volume_bids()

Get the unit volume bids.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_volume_bids()
         DUID     BIDTYPE  MAXAVAIL  ENABLEMENTMIN  ENABLEMENTMAX  LOWBREAKPOINT  HIGHBREAKPOINT  BANDAVAIL1  BANDAVAIL2  BANDAVAIL3  BANDAVAIL4  BANDAVAIL5  BANDAVAIL6  BANDAVAIL7  BANDAVAIL8  BANDAVAIL9  BANDAVAIL10  RAMPDOWNRATE  RAMPUPRATE
0      AGLHAL      ENERGY     173.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0        60.0         0.0         0.0        160.0         720.0       720.0
1      AGLSOM      ENERGY     160.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        170.0         480.0       480.0
2     ANGAST1      ENERGY      43.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0        50.0         0.0         0.0         0.0         50.0         840.0       840.0
3       APD01   LOWER5MIN       0.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        300.0           0.0         0.0
4       APD01  LOWER60SEC       0.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        300.0           0.0         0.0
...       ...         ...       ...            ...            ...            ...             ...         ...         ...         ...         ...         ...         ...         ...         ...         ...          ...           ...         ...
1021    YWPS4   LOWER6SEC      25.0          250.0          385.0          275.0           385.0        15.0        10.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0          0.0           0.0         0.0
1022    YWPS4   RAISE5MIN       0.0          250.0          390.0          250.0           380.0         0.0         0.0         0.0         0.0         5.0         0.0         0.0         5.0         0.0         10.0           0.0         0.0
1023    YWPS4    RAISEREG      15.0          250.0          385.0          250.0           370.0         0.0         0.0         0.0         0.0         0.0         0.0         5.0        10.0         0.0          5.0           0.0         0.0
1024    YWPS4  RAISE60SEC      10.0          220.0          400.0          220.0           390.0         0.0         0.0         0.0         0.0         0.0         5.0         5.0         0.0         0.0         10.0           0.0         0.0
1025    YWPS4   RAISE6SEC      15.0          220.0          405.0          220.0           390.0         0.0         0.0         0.0        10.0         5.0         0.0         0.0         0.0         0.0         10.0           0.0         0.0

[1026 rows x 19 columns]
Returns:

Columns:         Description:

DUID             unique identifier of a dispatch unit, (as str)
BIDTYPE          the service the bid applies to, (as str)
MAXAVAIL         the bid in unit availability, in MW, (as np.float64)
ENABLEMENTMIN    see AEMO docs, in MW, (as np.float64)
ENABLEMENTMAX    see AEMO docs, in MW, (as np.float64)
LOWBREAKPOINT    see AEMO docs, in MW, (as np.float64)
HIGHBREAKPOINT   see AEMO docs, in MW, (as np.float64)
BANDAVAIL1       the volume bid in the first bid band, in MW,
                 (as np.float64)
:
BANDAVAIL10      the volume bid in the tenth bid band, in MW,
                 (as np.float64)
RAMPDOWNRATE     the bid in ramp down rate, in MW/h, (as np.float64)
RAMPUPRATE       the bid in ramp up rate, in MW/h, (as np.float64)

Return type:

pd.DataFrame

get_unit_price_bids()

Get the unit price bids.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_price_bids()
         DUID     BIDTYPE  PRICEBAND1  PRICEBAND2  PRICEBAND3  PRICEBAND4  PRICEBAND5  PRICEBAND6  PRICEBAND7  PRICEBAND8  PRICEBAND9  PRICEBAND10
0      AGLHAL      ENERGY    -1000.00        0.00      278.81      368.81      418.81      498.81      578.81     1365.56    10578.87     13998.99
1      AGLSOM      ENERGY    -1000.00        0.00       85.00      110.00      145.00      284.00      451.00     1001.00    13300.87     14499.96
2     ANGAST1      ENERGY    -1000.00        0.00      125.00      200.20      299.19      379.98      589.99     1374.85    10618.00     14500.00
3       APD01   LOWER5MIN        0.00        1.00        2.00        3.00        4.00        5.00        6.00        7.00        8.00         9.00
4       APD01  LOWER60SEC        0.00        1.00        2.00        3.00        4.00        5.00        6.00        7.00        8.00         9.00
...       ...         ...         ...         ...         ...         ...         ...         ...         ...         ...         ...          ...
1021    YWPS4   LOWER6SEC        0.03        0.05        0.16        0.30        1.90       25.04       30.04       99.00     4600.00      9899.00
1022    YWPS4   RAISE5MIN        0.05        1.78        4.48       14.50       30.03       49.00       87.70      100.00    11990.00     12400.40
1023    YWPS4    RAISEREG        0.05        2.70        9.99       19.99       49.00       95.50      240.00      450.50      950.50     11900.00
1024    YWPS4  RAISE60SEC        0.17        1.80        4.80       10.01       21.00       39.00       52.00      102.00     4400.00     11999.00
1025    YWPS4   RAISE6SEC        0.48        1.75        4.90       20.70       33.33       99.90      630.00     1999.00     6000.00     12299.00

[1026 rows x 12 columns]
Returns:

Columns:      Description:

DUID          unique identifier of a dispatch unit, (as str)
BIDTYPE       the service the bid applies to, (as str)
PRICEBAND1    the price of the first bid band, in $/MWh,
              (as np.float64)
:
PRICEBAND10   the price of the tenth bid band, in $/MWh,
              (as np.float64)

Return type:

pd.DataFrame

get_UIGF_values()

Get the unit unconstrained intermittent generation forecast.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_UIGF_values()
        DUID    UIGF
0      ARWF1  56.755
1   BALDHWF1   9.160
2      BANN1   0.000
3     BLUFF1   4.833
4     BNGSF1   0.000
..       ...     ...
57     WGWF1  25.445
58   WHITSF1   0.000
59  WOODLWN1   0.075
60     WRSF1   0.000
61     WRWF1  15.760

[62 rows x 2 columns]
Returns:

Columns:   Description:

DUID       unique identifier of a dispatch unit, (as str)
UIGF       the unit's generation forecast for the end of the
           interval, in MW, (as np.float64)

Return type:

pd.DataFrame

get_violations()

Get the total volume violation of different constraint sets.

For more information on the constraint sets see AEMO docs.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_violations()
{'regional_demand': 0.0, 'interocnnector': 0.0, 'generic_constraint': 0.0, 'ramp_rate': 0.0, 'unit_capacity': 0.36, 'energy_constraint': 0.0, 'energy_offer': 0.0, 'fcas_profile': 0.0, 'fast_start': 0.0, 'mnsp_ramp_rate': 0.0, 'msnp_offer': 0.0, 'mnsp_capacity': 0.0, 'ugif': 0.0}
Return type:

dict

get_constraint_violation_prices()

Get the price of violating different constraint sets.

For more information on the constraint sets see AEMO docs.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_violation_prices()
{'regional_demand': 2175000.0, 'interocnnector': 16675000.0, 'generic_constraint': 435000.0, 'ramp_rate': 16747500.0, 'unit_capacity': 5365000.0, 'energy_offer': 16457500.0, 'fcas_profile': 2247500.0, 'fcas_max_avail': 2247500.0, 'fcas_enablement_min': 1015000.0, 'fcas_enablement_max': 1015000.0, 'fast_start': 16385000.0, 'mnsp_ramp_rate': 16747500.0, 'msnp_offer': 16457500.0, 'mnsp_capacity': 5292500.0, 'uigf': 5582500.0, 'voll': 14500.0, 'tiebreak': 1e-06}
Return type:

dict

is_intervention_period()

Check if the interval currently loaded was subject to an intervention.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.is_intervention_period()
False
Return type:

bool

get_constraint_rhs()

Get generic constraints rhs values.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_rhs()
                     set           rhs
0               #BANN1_E     32.000000
1              #BNGSF2_E      3.000000
2            #CROWLWF1_E     43.000000
3             #CSPVPS1_E     29.000000
4             #DAYDSF1_E      0.000000
..                   ...           ...
704          V_OWF_NRB_0  10000.001000
705  V_OWF_TGTSNRBHTN_30  10030.000000
706        V_S_NIL_ROCOF    812.280029
707          V_T_NIL_BL1    478.000000
708        V_T_NIL_FCSPS    425.154024

[709 rows x 2 columns]
Returns:

Columns:   Description:

set        the unique identifier of the generic constraint, (as str)
rhs        the rhs value of the constraint, (as np.float64)

Return type:

pd.DataFrame

get_constraint_type()

Get generic constraints type.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_type()
                     set type       cost
0               #BANN1_E   LE  5220000.0
1              #BNGSF2_E   LE  5220000.0
2            #CROWLWF1_E   LE  5220000.0
3             #CSPVPS1_E   LE  5220000.0
4             #DAYDSF1_E   LE  5220000.0
..                   ...  ...        ...
704          V_OWF_NRB_0   LE  5220000.0
705  V_OWF_TGTSNRBHTN_30   LE  5220000.0
706        V_S_NIL_ROCOF   LE   507500.0
707          V_T_NIL_BL1   LE  5220000.0
708        V_T_NIL_FCSPS   LE   435000.0

[709 rows x 3 columns]
Returns:

Columns:   Description:

set        the unique identifier of the generic constraint, (as str)
type       the type of constraint, i.e. '=', '<=' or '>=', (as str)
cost       the cost of violating the constraint, (as np.float64)

Return type:

pd.DataFrame

get_constraint_region_lhs()

Get generic constraints lhs term regional coefficients.

This is a compact way of describing constraints that apply to all units in a region. If a constraint set appears here and also in the unit specific lhs table, then the coefficient used in the constraint is the sum of the two coefficients; this can be used to exclude particular units from otherwise region-wide constraints.
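
As a hypothetical illustration of this summing behaviour (the frames below are invented for the example, not drawn from the cache):

>>> import pandas as pd
>>> region_lhs = pd.DataFrame({
...     'set': ['X'], 'region': ['VIC1'], 'service': ['ENOF'], 'coefficient': [1.0]})
>>> unit_lhs = pd.DataFrame({
...     'set': ['X'], 'unit': ['EXAMPLE1'], 'service': ['ENOF'], 'coefficient': [-1.0]})

Here every VIC1 energy offer in set X receives a coefficient of 1.0 from the regional term, but the effective coefficient for the unit EXAMPLE1 is 1.0 + (-1.0) = 0.0, excluding it from the constraint.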

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_region_lhs()
                   set region service  coefficient
0        F_I+LREG_0120   NSW1    L5RE          1.0
1        F_I+LREG_0120   QLD1    L5RE          1.0
2        F_I+LREG_0120    SA1    L5RE          1.0
3        F_I+LREG_0120   TAS1    L5RE          1.0
4        F_I+LREG_0120   VIC1    L5RE          1.0
..                 ...    ...     ...          ...
478   F_T+NIL_WF_TG_R5   TAS1    R5RE          1.0
479   F_T+NIL_WF_TG_R6   TAS1    R6SE          1.0
480  F_T+NIL_WF_TG_R60   TAS1    R60S          1.0
481      F_T+RREG_0050   TAS1    R5RE          1.0
482    F_T_NIL_MINP_R6   TAS1    R6SE          1.0

[483 rows x 4 columns]
Returns:

Columns:      Description:

set           the unique identifier of the generic constraint, (as str)
region        the regions the constraint applies in, (as str)
service       the services the constraint applies to, (as str)
coefficient   the coefficient of the terms on the lhs, (as np.float64)

Return type:

pd.DataFrame

get_constraint_unit_lhs()

Get generic constraints lhs term unit coefficients.

If a constraint set appears here and also in the region lhs table, then the coefficient used in the constraint is the sum of the two coefficients; this can be used to exclude particular units from otherwise region-wide constraints.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_unit_lhs()
                      set      unit service  coefficient
0                #BANN1_E     BANN1    ENOF          1.0
1               #BNGSF2_E    BNGSF2    ENOF          1.0
2             #CROWLWF1_E  CROWLWF1    ENOF          1.0
3              #CSPVPS1_E   CSPVPS1    ENOF          1.0
4              #DAYDSF1_E   DAYDSF1    ENOF          1.0
...                   ...       ...     ...          ...
5864      V_ARWF_FSTTRP_5     ARWF1    ENOF          1.0
5865      V_MTGBRAND_33WT  MTGELWF1    ENOF          1.0
5866     V_OAKHILL_TFB_42  OAKLAND1    ENOF          1.0
5867          V_OWF_NRB_0  OAKLAND1    ENOF          1.0
5868  V_OWF_TGTSNRBHTN_30  OAKLAND1    ENOF          1.0

[5869 rows x 4 columns]
Returns:

Columns:      Description:

set           the unique identifier of the generic constraint, (as str)
unit          the units the constraint applies in, (as str)
service       the services the constraint applies to, (as str)
coefficient   the coefficient of the terms on the lhs, (as np.float64)

Return type:

pd.DataFrame

get_constraint_interconnector_lhs()

Get generic constraints lhs term interconnector coefficients.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_interconnector_lhs()
                     set interconnector  coefficient
0               DATASNAP      N-Q-MNSP1          1.0
1        DATASNAP_DFS_LS      N-Q-MNSP1          1.0
2      DATASNAP_DFS_NCAN      N-Q-MNSP1          1.0
3    DATASNAP_DFS_NCWEST      N-Q-MNSP1          1.0
4      DATASNAP_DFS_NNTH      N-Q-MNSP1          1.0
..                   ...            ...          ...
631      V^^S_NIL_TBSE_1           V-SA          1.0
632      V^^S_NIL_TBSE_2           V-SA          1.0
633        V_S_NIL_ROCOF           V-SA          1.0
634          V_T_NIL_BL1      T-V-MNSP1         -1.0
635        V_T_NIL_FCSPS      T-V-MNSP1         -1.0

[636 rows x 3 columns]
Returns:

Columns:         Description:

set              the unique identifier of the generic constraint, (as str)
interconnector   the interconnector the constraint applies in, (as str)
coefficient      the coefficient of the terms on the lhs, (as np.float64)

Return type:

pd.DataFrame

get_market_interconnector_link_bid_availability()

Get the bid availability of market interconnectors.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_market_interconnector_link_bid_availability()
  interconnector to_region  availability
0      T-V-MNSP1      TAS1         478.0
1      T-V-MNSP1      VIC1         478.0
Returns:

Columns:         Description:

interconnector   the interconnector the constraint applies in, (as str)
to_region        the direction the bid availability applies to, (as str)
availability     the availability as bid in by the interconnector,
                 (as np.float64)

Return type:

pd.DataFrame

find_intervals_with_violations(limit, start_year, start_month, end_year, end_month)

Find the set of dispatch intervals where the non-intervention dispatch runs had constraint violations.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.find_intervals_with_violations(limit=3, start_year=2019, start_month=1, end_year=2019, end_month=1)
{'2019/01/01 00:00:00': ['unit_capacity'], '2019/01/01 00:05:00': ['unit_capacity'], '2019/01/01 00:10:00': ['unit_capacity']}
Parameters:
  • limit (int) – number of intervals to find; finds the first intervals in chronological order

  • start_year (int) – year to start search

  • start_month (int) – month to start search

  • end_year (int) – year to end search

  • end_month (int) – month to end search

Return type:

dict

get_service_prices()

Get the energy market and FCAS prices by region.

Examples

>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_service_prices()
   region     service     price
0    NSW1      ENERGY  62.93553
1    NSW1   RAISE5MIN      4.39
2    NSW1  RAISE60SEC         1
3    NSW1  LOWER60SEC      0.07
4    NSW1   RAISE6SEC         1
5    NSW1   LOWER6SEC      0.03
6    QLD1      ENERGY  58.71004
7    QLD1   RAISE5MIN      4.39
8    QLD1  RAISE60SEC         1
9    QLD1  LOWER60SEC      0.07
10   QLD1   RAISE6SEC         1
11   QLD1   LOWER6SEC      0.03
12    SA1      ENERGY   79.0014
13    SA1   RAISE5MIN      4.39
14    SA1  RAISE60SEC         1
15    SA1  LOWER60SEC      0.07
16    SA1   RAISE6SEC         1
17    SA1   LOWER6SEC      0.03
18   TAS1      ENERGY  79.00957
19   TAS1   RAISE5MIN      14.4
20   TAS1  RAISE60SEC      4.95
21   TAS1  LOWER60SEC      0.07
22   TAS1   RAISE6SEC      4.95
23   TAS1   LOWER6SEC      0.03
24   VIC1      ENERGY  75.23031
25   VIC1   RAISE5MIN      4.39
26   VIC1  RAISE60SEC         1
27   VIC1  LOWER60SEC      0.07
28   VIC1   RAISE6SEC         1
29   VIC1   LOWER6SEC      0.03
Returns:

Columns:   Description:

region     the region, (as str)
service    the service, e.g. ENERGY, RAISE5MIN, LOWER6SEC,
           (as str)
price      the price of the service, (as np.float64)

Return type:

pd.DataFrame

exception nempy.historical_inputs.xml_cache.MissingDataError

Raised when data cannot be downloaded from NEMWeb.
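
A sketch of catching the error when an interval may be unavailable (the interval below is illustrative):

>>> manager = XMLCacheManager('test_nemde_cache')
>>> try:
...     manager.load_interval('2009/01/01 00:00:00')
... except MissingDataError:
...     pass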

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

mms_db

Classes:

DBManager(connection)

Constructs and manages a sqlite database for accessing historical inputs for NEM spot market dispatch.

InputsBySettlementDate(table_name, ...)

Manages retrieving dispatch inputs by SETTLEMENTDATE.

InputsByIntervalDateTime(table_name, ...)

Manages retrieving dispatch inputs by INTERVAL_DATETIME.

InputsByDay(table_name, table_columns, ...)

Manages retrieving dispatch inputs by SETTLEMENTDATE, where inputs are stored on a daily basis.

InputsStartAndEnd(table_name, table_columns, ...)

Manages retrieving dispatch inputs by START_DATE and END_DATE.

InputsByMatchDispatchConstraints(table_name, ...)

Manages retrieving dispatch inputs by matching against the DISPATCHCONSTRAINTS table.

InputsByEffectiveDateVersionNoAndDispatchInterconnector(...)

Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.

InputsByEffectiveDateVersionNo(table_name, ...)

Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.

InputsNoFilter(table_name, table_columns, ...)

Manages retrieving dispatch inputs where no filter is required.

class nempy.historical_inputs.mms_db.DBManager(connection)

Constructs and manages a sqlite database for accessing historical inputs for NEM spot market dispatch.

Constructs a database if none exists, otherwise connects to an existing database. Specific datasets can be added to the database from the AEMO NEMweb portal and inputs can be retrieved on a 5 min dispatch interval basis.

Examples

Create the database or connect to an existing one.

>>> import sqlite3
>>> con = sqlite3.connect('historical.db')

Create the database manager.

>>> historical = DBManager(con)

Create a set of default tables in the database.

>>> historical.create_tables()

Add data from the AEMO NEMweb data portal. In this case we are adding data from the table DISPATCHREGIONSUM, which contains a dispatch summary by region; the data comes in monthly chunks.

>>> historical.DISPATCHREGIONSUM.add_data(year=2020, month=1)
>>> historical.DISPATCHREGIONSUM.add_data(year=2020, month=2)

This table has an add_data method indicating that data provided by AEMO comes in monthly files that do not overlap. If you need data for multiple months then multiple add_data calls can be made.

Data for a specific 5 min dispatch interval can then be retrieved.

>>> print(historical.DISPATCHREGIONSUM.get_data('2020/01/10 12:35:00').head())
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/01/10 12:35:00     NSW1      9938.01        34.23926     9902.79199
1  2020/01/10 12:35:00     QLD1      6918.63        26.47852     6899.76270
2  2020/01/10 12:35:00      SA1      1568.04         4.79657     1567.85864
3  2020/01/10 12:35:00     TAS1      1124.05        -3.43994     1109.36963
4  2020/01/10 12:35:00     VIC1      6633.45        37.05273     6570.15527

Some tables will have a set_data method instead of an add_data method, indicating that the most recent data file provided by AEMO contains all historical data for this table. In this case, if multiple calls to the set_data method are made, the new data replaces the old.

>>> historical.DUDETAILSUMMARY.set_data(year=2020, month=2)

Data for a specific 5 min dispatch interval can then be retrieved.

>>> print(historical.DUDETAILSUMMARY.get_data('2020/01/10 12:35:00').head())
       DUID           START_DATE             END_DATE DISPATCHTYPE CONNECTIONPOINTID REGIONID  TRANSMISSIONLOSSFACTOR  DISTRIBUTIONLOSSFACTOR  SCHEDULE_TYPE
0    AGLHAL  2019/07/01 00:00:00  2020/01/20 00:00:00    GENERATOR             SHPS1      SA1                  0.9748                  1.0000      SCHEDULED
1   AGLNOW1  2019/07/01 00:00:00  2999/12/31 00:00:00    GENERATOR             NDT12     NSW1                  0.9929                  1.0000  NON-SCHEDULED
2  AGLSITA1  2019/07/01 00:00:00  2999/12/31 00:00:00    GENERATOR            NLP13K     NSW1                  1.0009                  1.0000  NON-SCHEDULED
3    AGLSOM  2019/07/01 00:00:00  2999/12/31 00:00:00    GENERATOR             VTTS1     VIC1                  0.9915                  0.9891      SCHEDULED
4   ANGAST1  2019/07/01 00:00:00  2999/12/31 00:00:00    GENERATOR             SDRN1      SA1                  0.9517                  0.9890      SCHEDULED
Parameters:

con (sqlite3.connection) –

BIDPEROFFER_D

Unit volume bids by 5 min dispatch intervals.

Type:

InputsByIntervalDateTime

BIDDAYOFFER_D

Unit price bids by market day.

Type:

InputsByDay

DISPATCHREGIONSUM

Regional demand terms by 5 min dispatch intervals.

Type:

InputsBySettlementDate

DISPATCHLOAD

Unit operating conditions by 5 min dispatch intervals.

Type:

InputsBySettlementDate

DUDETAILSUMMARY

Unit information by the start and end times of when the information is applicable.

Type:

InputsStartAndEnd

DISPATCHCONSTRAINT

The generic constraints that were used in each 5 min interval dispatch.

Type:

InputsBySettlementDate

GENCONDATA

The generic constraints information, their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.

Type:

InputsByMatchDispatchConstraints

SPDREGIONCONSTRAINT

The regional lhs terms in generic constraints, their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.

Type:

InputsByMatchDispatchConstraints

SPDCONNECTIONPOINTCONSTRAINT

The connection point lhs terms in generic constraints, their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.

Type:

InputsByMatchDispatchConstraints

SPDINTERCONNECTORCONSTRAINT

The interconnector lhs terms in generic constraints, their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.

Type:

InputsByMatchDispatchConstraints

INTERCONNECTOR

The regions that each interconnector links.

Type:

InputsNoFilter

INTERCONNECTORCONSTRAINT

Interconnector properties FROMREGIONLOSSSHARE, LOSSCONSTANT, LOSSFLOWCOEFFICIENT, MAXMWIN, MAXMWOUT by EFFECTIVEDATE and VERSIONNO.

Type:

InputsByEffectiveDateVersionNoAndDispatchInterconnector

LOSSMODEL

Break points used in linearly interpolating interconnector loss functions by EFFECTIVEDATE and VERSIONNO.

Type:

InputsByEffectiveDateVersionNoAndDispatchInterconnector

LOSSFACTORMODEL

Coefficients of demand terms in interconnector loss functions.

Type:

InputsByEffectiveDateVersionNoAndDispatchInterconnector

DISPATCHINTERCONNECTORRES

Record of which interconnectors were used in a particular dispatch interval.

Type:

InputsBySettlementDate

Methods:

create_tables()

Drops any existing default tables and creates new ones; this method is generally called when creating a new database.

create_tables()

Drops any existing default tables and creates new ones; this method is generally called when creating a new database.

Examples

Create the database or connect to an existing one.

>>> import sqlite3
>>> con = sqlite3.connect('historical.db')

Create the database manager.

>>> historical = DBManager(con)

Create a set of default tables in the database.

>>> historical.create_tables()

Default tables will now exist, but will be empty.

>>> print(pd.read_sql("Select * from DISPATCHREGIONSUM", con=con))
Empty DataFrame
Columns: [SETTLEMENTDATE, REGIONID, TOTALDEMAND, DEMANDFORECAST, INITIALSUPPLY]
Index: []

If you add data and then call create_tables again, any previously added data will be emptied.

>>> historical.DISPATCHREGIONSUM.add_data(year=2020, month=1)
>>> print(pd.read_sql("Select * from DISPATCHREGIONSUM limit 3", con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/01/01 00:05:00     NSW1      7245.31       -26.35352     7284.32178
1  2020/01/01 00:05:00     QLD1      6095.75       -24.29639     6129.36279
2  2020/01/01 00:05:00      SA1      1466.53         1.47190     1452.25647
>>> historical.create_tables()
>>> print(pd.read_sql("Select * from DISPATCHREGIONSUM", con=con))
Empty DataFrame
Columns: [SETTLEMENTDATE, REGIONID, TOTALDEMAND, DEMANDFORECAST, INITIALSUPPLY]
Index: []
Return type:

None

class nempy.historical_inputs.mms_db.InputsBySettlementDate(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by SETTLEMENTDATE.

Methods:

get_data(date_time)

Retrieves data for the specified date_time e.g.

add_data(year, month)

Download data for the given table and time, appending to any existing data.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

get_data(date_time)

Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = InputsBySettlementDate(table_name='EXAMPLE', table_columns=['SETTLEMENTDATE', 'INITIALMW'],
...                                table_primary_keys=['SETTLEMENTDATE'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the add_data method to add historical data, but here we will add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'SETTLEMENTDATE': ['2019/01/01 11:55:00', '2019/01/01 12:00:00'],
...   'INITIALMW': [1.0, 2.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

When we call get_data the output is filtered by SETTLEMENTDATE.

>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/01 12:00:00        2.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

add_data(year, month)

Download data for the given table and time, appending to any existing data.

Note

This method and its documentation is inherited from the _MultiDataSource class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MultiDataSource(table_name='DISPATCHREGIONSUM',
...   table_columns=['SETTLEMENTDATE', 'REGIONID', 'TOTALDEMAND',
...                  'DEMANDFORECAST', 'INITIALSUPPLY'],
...   table_primary_keys=['SETTLEMENTDATE', 'REGIONID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.add_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DISPATCHREGIONSUM order by SETTLEMENTDATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/02/01 00:00:00     VIC1       5935.1        -15.9751     5961.77002

If we subsequently add data from an earlier month, the old data remains in the table, in addition to the new data.

>>> table.add_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/02/01 00:00:00     VIC1       5935.1        -15.9751     5961.77002

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
class nempy.historical_inputs.mms_db.InputsByIntervalDateTime(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by INTERVAL_DATETIME.

Methods:

get_data(date_time)

Retrieves data for the specified date_time e.g.

add_data(year, month)

Download data for the given table and time, appending to any existing data.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

get_data(date_time)

Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = InputsByIntervalDateTime(table_name='EXAMPLE', table_columns=['INTERVAL_DATETIME', 'INITIALMW'],
...                                  table_primary_keys=['INTERVAL_DATETIME'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the add_data method to add historical data, but here we will add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'INTERVAL_DATETIME': ['2019/01/01 11:55:00', '2019/01/01 12:00:00'],
...   'INITIALMW': [1.0, 2.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

When we call get_data the output is filtered by INTERVAL_DATETIME.

>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
     INTERVAL_DATETIME  INITIALMW
0  2019/01/01 12:00:00        2.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

add_data(year, month)

Download data for the given table and time, appending to any existing data.

Note

This method and its documentation is inherited from the _MultiDataSource class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MultiDataSource(table_name='DISPATCHREGIONSUM',
...   table_columns=['SETTLEMENTDATE', 'REGIONID', 'TOTALDEMAND',
...                  'DEMANDFORECAST', 'INITIALSUPPLY'],
...   table_primary_keys=['SETTLEMENTDATE', 'REGIONID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.add_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DISPATCHREGIONSUM order by SETTLEMENTDATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/02/01 00:00:00     VIC1       5935.1        -15.9751     5961.77002

If we subsequently add data from an earlier month, the old data remains in the table, in addition to the new data.

>>> table.add_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/02/01 00:00:00     VIC1       5935.1        -15.9751     5961.77002

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
class nempy.historical_inputs.mms_db.InputsByDay(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by SETTLEMENTDATE, where inputs are stored on a daily basis.

Methods:

get_data(date_time)

Retrieves data for the specified date_time e.g.

add_data(year, month)

Download data for the given table and time, appending to any existing data.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

get_data(date_time)

Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00, where inputs are stored on a daily basis.

Note that a market day begins with the first 5 min interval at 04:05:00. Therefore, if an input date_time of 2019/01/01 04:05:00 is given, inputs where the SETTLEMENTDATE is 2019/01/01 00:00:00 will be retrieved, and if a date_time of 2019/01/01 04:00:00 or earlier is given, inputs where the SETTLEMENTDATE is 2018/12/31 00:00:00 will be retrieved.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = InputsByDay(table_name='EXAMPLE', table_columns=['SETTLEMENTDATE', 'INITIALMW'],
...                     table_primary_keys=['SETTLEMENTDATE'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the add_data method to add historical data, but here we will add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'SETTLEMENTDATE': ['2019/01/01 00:00:00', '2019/01/02 00:00:00'],
...   'INITIALMW': [1.0, 2.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

When we call get_data the output is filtered by SETTLEMENTDATE and the results from the appropriate market day starting at 04:05:00 are retrieved. In the results below, note when the output changes.

>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/01 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/02 04:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/01 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/02 04:05:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/02 00:00:00        2.0
>>> print(table.get_data(date_time='2019/01/02 12:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/02 00:00:00        2.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

add_data(year, month)

Download data for the given table and time, appending to any existing data.

Note

This method and its documentation is inherited from the _MultiDataSource class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MultiDataSource(table_name='DISPATCHREGIONSUM',
...   table_columns=['SETTLEMENTDATE', 'REGIONID', 'TOTALDEMAND',
...                  'DEMANDFORECAST', 'INITIALSUPPLY'],
...   table_primary_keys=['SETTLEMENTDATE', 'REGIONID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.add_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DISPATCHREGIONSUM order by SETTLEMENTDATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/02/01 00:00:00     VIC1       5935.1        -15.9751     5961.77002

If we subsequently add data from an earlier month, the old data remains in the table, in addition to the new data.

>>> table.add_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE REGIONID  TOTALDEMAND  DEMANDFORECAST  INITIALSUPPLY
0  2020/02/01 00:00:00     VIC1       5935.1        -15.9751     5961.77002

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
class nempy.historical_inputs.mms_db.InputsStartAndEnd(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by START_DATE and END_DATE.

Methods:

get_data(date_time)

Retrieves data for the specified date_time by START_DATE and END_DATE.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

set_data(year, month)

Download data for the given table and time, replacing any existing data.

get_data(date_time)

Retrieves data for the specified date_time by START_DATE and END_DATE.

Records with a START_DATE before or equal to the date_time and an END_DATE after the date_time will be returned.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = InputsStartAndEnd(table_name='EXAMPLE', table_columns=['START_DATE', 'END_DATE', 'INITIALMW'],
...                           table_primary_keys=['START_DATE'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the set_data method to add historical data, but here we will add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'START_DATE': ['2019/01/01 00:00:00', '2019/01/02 00:00:00'],
...   'END_DATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00'],
...   'INITIALMW': [1.0, 2.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

When we call get_data the output is filtered by START_DATE and END_DATE.

>>> print(table.get_data(date_time='2019/01/01 00:00:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/01 00:00:00  2019/01/02 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/01 00:00:00  2019/01/02 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/02 00:00:00  2019/01/03 00:00:00        2.0
>>> print(table.get_data(date_time='2019/01/02 00:12:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/02 00:00:00  2019/01/03 00:00:00        2.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
set_data(year, month)

Download data for the given table and time, replacing any existing data.

Note

This method and its documentation is inherited from the _SingleDataSource class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                          table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                          table_primary_keys=['START_DATE', 'DUID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.set_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1

However, if we subsequently set data from a previous date, any existing data will be replaced. Note the change in the most recent record in the data set below.

>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

class nempy.historical_inputs.mms_db.InputsByMatchDispatchConstraints(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by matching against the DISPATCHCONSTRAINTS table.

Methods:

get_data(date_time)

Retrieves data for the specified date_time by matching against the DISPATCHCONSTRAINT table.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

set_data(year, month)

Download data for the given table and time, replacing any existing data.

get_data(date_time)

Retrieves data for the specified date_time by matching against the DISPATCHCONSTRAINT table.

First the DISPATCHCONSTRAINT table is filtered by SETTLEMENTDATE and then the contents of the class's table is matched against that.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = InputsByMatchDispatchConstraints(table_name='EXAMPLE',
...                           table_columns=['GENCONID', 'EFFECTIVEDATE', 'VERSIONNO', 'RHS'],
...                           table_primary_keys=['GENCONID', 'EFFECTIVEDATE', 'VERSIONNO'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the set_data method to add historical data, but here we will add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'GENCONID': ['X', 'X', 'Y', 'Y'],
...   'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/01 00:00:00',
...                     '2019/01/03 00:00:00'],
...   'VERSIONNO': [1, 2, 2, 3],
...   'RHS': [1.0, 2.0, 2.0, 3.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
>>> data = pd.DataFrame({
...   'SETTLEMENTDATE' : ['2019/01/02 00:00:00', '2019/01/02 00:00:00', '2019/01/03 00:00:00',
...                       '2019/01/03 00:00:00'],
...   'CONSTRAINTID': ['X', 'Y', 'X', 'Y'],
...   'GENCONID_EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/01 00:00:00', '2019/01/03 00:00:00',
...                              '2019/01/03 00:00:00'],
...   'GENCONID_VERSIONNO': [1, 2, 2, 3]})
>>> _ = data.to_sql('DISPATCHCONSTRAINT', con=con, if_exists='append', index=False)

When we call get_data the output is filtered by the contents of DISPATCHCONSTRAINT.

>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
  GENCONID        EFFECTIVEDATE VERSIONNO  RHS
0        X  2019/01/02 00:00:00         1  1.0
1        Y  2019/01/01 00:00:00         2  2.0
>>> print(table.get_data(date_time='2019/01/03 00:00:00'))
  GENCONID        EFFECTIVEDATE VERSIONNO  RHS
0        X  2019/01/03 00:00:00         2  2.0
1        Y  2019/01/03 00:00:00         3  3.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database. Note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
set_data(year, month)

Download data for the given table and time, replace any existing data.

Note

This method and its documentation is inherited from the _SingleDataSource class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                          table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                          table_primary_keys=['START_DATE', 'DUID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.set_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1

However, if we subsequently set data for an earlier date, any existing data will be replaced. Note the change in the most recent record in the data set below.

>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

class nempy.historical_inputs.mms_db.InputsByEffectiveDateVersionNoAndDispatchInterconnector(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.

Methods:

get_data(date_time)

Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

set_data(year, month)

"Download data for the given table and time, replace any existing data.

get_data(date_time)

Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.

For each unique record (identified by the remaining primary keys, excluding EFFECTIVEDATE and VERSIONNO) the record with the most recent EFFECTIVEDATE, and then the highest VERSIONNO, is returned.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical_inputs.db')

Create the table object.

>>> table = InputsByEffectiveDateVersionNoAndDispatchInterconnector(table_name='EXAMPLE',
...                           table_columns=['INTERCONNECTORID', 'EFFECTIVEDATE', 'VERSIONNO', 'INITIALMW'],
...                           table_primary_keys=['INTERCONNECTORID', 'EFFECTIVEDATE', 'VERSIONNO'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the set_data method to add historical data, but here we add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'INTERCONNECTORID': ['X', 'X', 'Y', 'Y'],
...   'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/01 00:00:00',
...                     '2019/01/03 00:00:00'],
...   'VERSIONNO': [1, 2, 2, 3],
...   'INITIALMW': [1.0, 2.0, 2.0, 3.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

We also need to add data to DISPATCHINTERCONNECTORRES because the results of the get_data method are filtered against this table.

>>> data = pd.DataFrame({
...   'INTERCONNECTORID': ['X', 'X', 'Y'],
...   'SETTLEMENTDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/02 00:00:00']})
>>> _ = data.to_sql('DISPATCHINTERCONNECTORRES', con=con, if_exists='append', index=False)

When we call get_data the output is filtered by the contents of DISPATCHINTERCONNECTORRES.

>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
  INTERCONNECTORID        EFFECTIVEDATE VERSIONNO  INITIALMW
0                X  2019/01/02 00:00:00         1        1.0
1                Y  2019/01/01 00:00:00         2        2.0

In the next interval interconnector Y is not present in DISPATCHINTERCONNECTORRES.

>>> print(table.get_data(date_time='2019/01/03 00:00:00'))
  INTERCONNECTORID        EFFECTIVEDATE VERSIONNO  INITIALMW
0                X  2019/01/03 00:00:00         2        2.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical_inputs.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database. Note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
set_data(year, month)

Download data for the given table and time, replace any existing data.

Note

This method and its documentation is inherited from the _SingleDataSource class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                          table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                          table_primary_keys=['START_DATE', 'DUID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.set_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1

However, if we subsequently set data for an earlier date, any existing data will be replaced. Note the change in the most recent record in the data set below.

>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

class nempy.historical_inputs.mms_db.InputsByEffectiveDateVersionNo(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.

Methods:

get_data(date_time)

Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

set_data(year, month)

"Download data for the given table and time, replace any existing data.

get_data(date_time)

Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.

For each unique record (identified by the remaining primary keys, excluding EFFECTIVEDATE and VERSIONNO) the record with the most recent EFFECTIVEDATE, and then the highest VERSIONNO, is returned.

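The selection can be pictured as a sort-and-take-last over the remaining primary keys. The snippet below is a simplified pandas sketch of that idea (the function is ours, not the SQL the class actually executes); it works against the example table constructed below, relying on the '%Y/%m/%d %H:%M:%S' date strings sorting correctly as text.

import pandas as pd

def most_recent_records(data, date_time):
    # Discard records that only take effect after the requested interval.
    candidates = data[data['EFFECTIVEDATE'] <= date_time]
    # For each DUID keep the most recent EFFECTIVEDATE, then the highest VERSIONNO.
    candidates = candidates.sort_values(['EFFECTIVEDATE', 'VERSIONNO'])
    return candidates.groupby('DUID', as_index=False).last()
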
Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = InputsByEffectiveDateVersionNo(table_name='EXAMPLE',
...                           table_columns=['DUID', 'EFFECTIVEDATE', 'VERSIONNO', 'INITIALMW'],
...                           table_primary_keys=['DUID', 'EFFECTIVEDATE', 'VERSIONNO'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the set_data method to add historical data, but here we add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'DUID': ['X', 'X', 'Y', 'Y'],
...   'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/01 00:00:00',
...                     '2019/01/03 00:00:00'],
...   'VERSIONNO': [1, 2, 2, 3],
...   'INITIALMW': [1.0, 2.0, 2.0, 3.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

When we call get_data the output is filtered to the most recent effective date and the highest version number.

>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
  DUID        EFFECTIVEDATE VERSIONNO  INITIALMW
0    X  2019/01/02 00:00:00         1        1.0
1    Y  2019/01/01 00:00:00         2        2.0

In the next interval the records with the more recent effective dates and version numbers are returned.

>>> print(table.get_data(date_time='2019/01/03 00:00:00'))
  DUID        EFFECTIVEDATE VERSIONNO  INITIALMW
0    X  2019/01/03 00:00:00         2        2.0
1    Y  2019/01/03 00:00:00         3        3.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:

date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.

Return type:

pd.DataFrame

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database. Note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
set_data(year, month)

Download data for the given table and time, replace any existing data.

Note

This method and its documentation is inherited from the _SingleDataSource class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                          table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                          table_primary_keys=['START_DATE', 'DUID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.set_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1

However, if we subsequently set data for an earlier date, any existing data will be replaced. Note the change in the most recent record in the data set below.

>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

class nempy.historical_inputs.mms_db.InputsNoFilter(table_name, table_columns, table_primary_keys, con)

Manages retrieving dispatch inputs where no filter is required.

Methods:

get_data()

Retrieves all data in the table.

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

set_data(year, month)

"Download data for the given table and time, replace any existing data.

get_data()

Retrieves all data in the table.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical_inputs.db')

Create the table object.

>>> table = InputsNoFilter(table_name='EXAMPLE', table_columns=['DUID', 'INITIALMW'],
...                        table_primary_keys=['DUID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Normally you would use the set_data method to add historical data, but here we add some simple example data directly to the database.

>>> data = pd.DataFrame({
...   'DUID': ['X', 'Y'],
...   'INITIALMW': [1.0, 2.0]})
>>> _ = data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)

When we call get_data all data in the table is returned.

>>> print(table.get_data())
  DUID  INITIALMW
0    X        1.0
1    Y        2.0

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical_inputs.db')
Return type:

pd.DataFrame

create_table_in_sqlite_db()

Creates a table in the sqlite database that the object has a connection to.

Note

This method and its documentation is inherited from the _MMSTable class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                  con=con)

Create the corresponding table in the sqlite database. Note this step may not be needed if you have connected to an existing database.

>>> table.create_table_in_sqlite_db()

Now a table exists in the database, but it's empty.

>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
set_data(year, month)

Download data for the given table and time, replace any existing data.

Note

This method and its documentation is inherited from the _SingleDataSource class.

Examples

>>> import sqlite3
>>> import os
>>> import pandas as pd

Set up a database or connect to an existing one.

>>> con = sqlite3.connect('historical.db')

Create the table object.

>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                          table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                          table_primary_keys=['START_DATE', 'DUID'], con=con)

Create the table in the database.

>>> table.create_table_in_sqlite_db()

Download data from http://nemweb.com.au/#mms-data-model into the table.

>>> table.set_data(year=2020, month=1)

Now the database should contain data for this table that is up to date as of the end of January.

>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1

However, if we subsequently set data for an earlier date, any existing data will be replaced. Note the change in the most recent record in the data set below.

>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1

Clean up by closing the database and deleting it if it's no longer needed.

>>> con.close()
>>> os.remove('historical.db')
Parameters:
  • year (int) – The year to download data for.

  • month (int) – The month to download data for.

Return type:

None

loaders

Classes:

RawInputsLoader(nemde_xml_cache_manager, ...)

Provides single interface for accessing raw historical inputs.

class nempy.historical_inputs.loaders.RawInputsLoader(nemde_xml_cache_manager, market_management_system_database)

Provides single interface for accessing raw historical inputs.

Examples

>>> import sqlite3
>>> from nempy.historical_inputs import mms_db
>>> from nempy.historical_inputs import xml_cache

For the RawInputsLoader to work we need to construct the database and XML cache that it loads inputs from, and then pass their interfaces to the loader.

>>> con = sqlite3.connect('market_management_system.db')
>>> mms_db_manager = mms_db.DBManager(connection=con)
>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')

In this example the database and cache have already been populated so the input loader can be created straight away.

>>> inputs_loader = RawInputsLoader(xml_cache_manager, mms_db_manager)

Then we set the dispatch interval that we want to load inputs from.

>>> inputs_loader.set_interval('2019/01/01 00:00:00')

And then we can load some inputs.

>>> inputs_loader.get_unit_volume_bids()
         DUID     BIDTYPE  MAXAVAIL  ENABLEMENTMIN  ENABLEMENTMAX  LOWBREAKPOINT  HIGHBREAKPOINT  BANDAVAIL1  BANDAVAIL2  BANDAVAIL3  BANDAVAIL4  BANDAVAIL5  BANDAVAIL6  BANDAVAIL7  BANDAVAIL8  BANDAVAIL9  BANDAVAIL10  RAMPDOWNRATE  RAMPUPRATE
0      AGLHAL      ENERGY     173.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0        60.0         0.0         0.0        160.0         720.0       720.0
1      AGLSOM      ENERGY     160.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        170.0         480.0       480.0
2     ANGAST1      ENERGY      43.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0        50.0         0.0         0.0         0.0         50.0         840.0       840.0
3       APD01   LOWER5MIN       0.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        300.0           0.0         0.0
4       APD01  LOWER60SEC       0.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        300.0           0.0         0.0
...       ...         ...       ...            ...            ...            ...             ...         ...         ...         ...         ...         ...         ...         ...         ...         ...          ...           ...         ...
1021    YWPS4   LOWER6SEC      25.0          250.0          385.0          275.0           385.0        15.0        10.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0          0.0           0.0         0.0
1022    YWPS4   RAISE5MIN       0.0          250.0          390.0          250.0           380.0         0.0         0.0         0.0         0.0         5.0         0.0         0.0         5.0         0.0         10.0           0.0         0.0
1023    YWPS4    RAISEREG      15.0          250.0          385.0          250.0           370.0         0.0         0.0         0.0         0.0         0.0         0.0         5.0        10.0         0.0          5.0           0.0         0.0
1024    YWPS4  RAISE60SEC      10.0          220.0          400.0          220.0           390.0         0.0         0.0         0.0         0.0         0.0         5.0         5.0         0.0         0.0         10.0           0.0         0.0
1025    YWPS4   RAISE6SEC      15.0          220.0          405.0          220.0           390.0         0.0         0.0         0.0        10.0         5.0         0.0         0.0         0.0         0.0         10.0           0.0         0.0

[1026 rows x 19 columns]

Methods:

set_interval(interval)

Set the interval to load inputs for.

get_unit_initial_conditions()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_initial_conditions

get_unit_volume_bids()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_volume_bids

get_unit_price_bids()

Direct interface to nempy.historical_inputs.mms_db.DBManager.BIDDAYOFFER_D.get_data

get_unit_details()

Direct interface to nempy.historical_inputs.mms_db.DBManager.DUDETAILSUMMARY.get_data

get_agc_enablement_limits()

Direct interface to nempy.historical_inputs.mms_db.DBManager.DISPATCHLOAD.get_data

get_UIGF_values()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_UIGF_values

get_violations()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_violations

get_constraint_violation_prices()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_violation_prices

get_constraint_rhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_rhs

get_constraint_type()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_type

get_constraint_region_lhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_region_lhs

get_constraint_unit_lhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_unit_lhs

get_constraint_interconnector_lhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_interconnector_lhs

get_market_interconnectors()

Direct interface to nempy.historical_inputs.mms_db.DBManager.MNSP_INTERCONNECTOR.get_data

get_market_interconnector_link_bid_availability()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_market_interconnector_link_bid_availability

get_interconnector_constraint_parameters()

Direct interface to nempy.historical_inputs.mms_db.DBManager.INTERCONNECTORCONSTRAINT.get_data

get_interconnector_definitions()

Direct interface to nempy.historical_inputs.mms_db.DBManager.INTERCONNECTOR.get_data

get_regional_loads()

Direct interface to nempy.historical_inputs.mms_db.DBManager.DISPATCHREGIONSUM.get_data

get_interconnector_loss_segments()

Direct interface to nempy.historical_inputs.mms_db.DBManager.LOSSMODEL.get_data

get_interconnector_loss_parameters()

Direct interface to nempy.historical_inputs.mms_db.DBManager.LOSSFACTORMODEL.get_data

get_unit_fast_start_parameters()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_fast_start_parameters

is_over_constrained_dispatch_rerun()

Checks if the over constrained dispatch rerun process was used by AEMO to dispatch this interval.

set_interval(interval)

Set the interval to load inputs for.

Examples

For an example see the class level documentation.

Parameters:

interval (str) – In the format ‘%Y/%m/%d %H:%M:%S’

get_unit_initial_conditions()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_initial_conditions

get_unit_volume_bids()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_volume_bids

get_unit_price_bids()

Direct interface to nempy.historical_inputs.mms_db.DBManager.BIDDAYOFFER_D.get_data

get_unit_details()

Direct interface to nempy.historical_inputs.mms_db.DBManager.DUDETAILSUMMARY.get_data

get_agc_enablement_limits()

Direct interface to nempy.historical_inputs.mms_db.DBManager.DISPATCHLOAD.get_data

get_UIGF_values()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_UIGF_values

get_violations()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_violations

get_constraint_violation_prices()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_violation_prices

get_constraint_rhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_rhs

get_constraint_type()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_type

get_constraint_region_lhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_region_lhs

get_constraint_unit_lhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_unit_lhs

get_constraint_interconnector_lhs()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_interconnector_lhs

get_market_interconnectors()

Direct interface to nempy.historical_inputs.mms_db.DBManager.MNSP_INTERCONNECTOR.get_data

get_market_interconnector_link_bid_availability()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_market_interconnector_link_bid_availability

get_interconnector_constraint_parameters()

Direct interface to nempy.historical_inputs.mms_db.DBManager.INTERCONNECTORCONSTRAINT.get_data

get_interconnector_definitions()

Direct interface to nempy.historical_inputs.mms_db.DBManager.INTERCONNECTOR.get_data

get_regional_loads()

Direct interface to nempy.historical_inputs.mms_db.DBManager.DISPATCHREGIONSUM.get_data

get_interconnector_loss_segments()

Direct interface to nempy.historical_inputs.mms_db.DBManager.LOSSMODEL.get_data

get_interconnector_loss_parameters()

Direct interface to nempy.historical_inputs.mms_db.DBManager.LOSSFACTORMODEL.get_data

get_unit_fast_start_parameters()

Direct interface to nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_fast_start_parameters

is_over_constrained_dispatch_rerun()

Checks if the over constrained dispatch rerun process was used by AEMO to dispatch this interval.

Examples

>>> import sqlite3
>>> from nempy.historical_inputs import mms_db
>>> from nempy.historical_inputs import xml_cache

For the RawInputsLoader to work we need to construct the database and XML cache that it loads inputs from, and then pass their interfaces to the loader.

>>> con = sqlite3.connect('market_management_system.db')
>>> mms_db_manager = mms_db.DBManager(connection=con)
>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')

In this example the database and cache have already been populated so the input loader can be created straight away.

>>> inputs_loader = RawInputsLoader(xml_cache_manager, mms_db_manager)

Then we set the dispatch interval that we want to load inputs from.

>>> inputs_loader.set_interval('2019/01/01 00:00:00')

And then we can load some inputs.

>>> inputs_loader.is_over_constrained_dispatch_rerun()
False
Return type:

bool

units

Exceptions:

MethodCallOrderError

Raised when methods are called in an incompatible order.

Classes:

UnitData(raw_input_loader)

Loads unit related raw inputs and preprocesses them for compatibility with nempy.markets.SpotMarket

exception nempy.historical_inputs.units.MethodCallOrderError

Raised when methods are called in an incompatible order.

class nempy.historical_inputs.units.UnitData(raw_input_loader)

Loads unit related raw inputs and preprocesses them for compatibility with nempy.markets.SpotMarket

Examples

This example shows the setup used for the examples in the class methods.

>>> import sqlite3
>>> from nempy.historical_inputs import mms_db
>>> from nempy.historical_inputs import xml_cache
>>> from nempy.historical_inputs import loaders

The UnitData class requires a RawInputsLoader instance.

>>> con = sqlite3.connect('market_management_system.db')
>>> mms_db_manager = mms_db.DBManager(connection=con)
>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager)
>>> inputs_loader.set_interval('2019/01/10 12:05:00')

Create the UnitData instance.

>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_bid_availability()
          unit  capacity
0       AGLHAL     170.0
1       AGLSOM     160.0
2      ANGAST1      44.0
23      BALBG1       0.0
33      BALBL1       0.0
...        ...       ...
989   YARWUN_1     165.0
990      YWPS1     380.0
999      YWPS2     180.0
1008     YWPS3     350.0
1017     YWPS4     340.0

[218 rows x 2 columns]

Methods:

get_unit_bid_availability()

Get the bid-in maximum availability for scheduled units.

get_unit_uigf_limits()

Get the maximum availability predicted by the unconstrained intermittent generation forecast.

get_ramp_rates_used_for_energy_dispatch([...])

Get ramp rates used for constraining energy dispatch.

get_as_bid_ramp_rates()

Get the ramp rates bid in by units.

get_initial_unit_output()

Get unit outputs at the start of the dispatch interval.

get_fast_start_profiles_for_dispatch([...])

Get the parameters needed to construct the fast start dispatch inflexibility profiles used for dispatch.

get_unit_info()

Get unit information.

get_processed_bids()

Get processed unit bids.

add_fcas_trapezium_constraints()

Load the FCAS trapezium constraints into the UnitData class so subsequent method calls can access them.

get_fcas_max_availability()

Get the unit bid maximum availability of each service.

get_fcas_regulation_trapeziums()

Get the unit bid FCAS trapeziums for regulation services.

get_scada_ramp_down_rates_of_lower_reg_units([...])

Get the SCADA ramp down rates for units with a lower regulation bid.

get_scada_ramp_up_rates_of_raise_reg_units([...])

Get the SCADA ramp up rates for units with a raise regulation bid.

get_contingency_services()

Get the unit bid FCAS trapeziums for contingency services.

get_unit_bid_availability()

Get the bid-in maximum availability for scheduled units.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_bid_availability()
          unit  capacity
0       AGLHAL     170.0
1       AGLSOM     160.0
2      ANGAST1      44.0
23      BALBG1       0.0
33      BALBL1       0.0
...        ...       ...
989   YARWUN_1     165.0
990      YWPS1     380.0
999      YWPS2     180.0
1008     YWPS3     350.0
1017     YWPS4     340.0

[218 rows x 2 columns]
Returns:
  • unit – unique identifier for units, (as str)

  • capacity – unit bid-in maximum availability, in MW, (as np.float64)

Return type:

pd.DataFrame

get_unit_uigf_limits()

Get the maximum availability predicted by the unconstrained intermittent generation forecast.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_uigf_limits()
        unit  capacity
0      ARWF1    18.654
1   BALDHWF1    11.675
2      BANN1    53.661
3     BLUFF1     8.655
4     BNGSF1    98.877
..       ...       ...
57     WGWF1     7.649
58   WHITSF1     6.075
59  WOODLWN1    11.659
60     WRSF1    20.000
61     WRWF1     7.180

[62 rows x 2 columns]
Returns:
  • unit – unique identifier for units, (as str)

  • capacity – the forecast max availability, in MW, (as np.float64)

Return type:

pd.DataFrame

get_ramp_rates_used_for_energy_dispatch(run_type='no_fast_start_units')

Get ramp rates used for constraining energy dispatch.

The minimum of the bid-in ramp rates and the SCADA telemetered ramp rates is used. If 'no_fast_start_units' is given as the run_type then no extra processing is applied to the ramp rates on the basis of the fast start inflexibility profiles. If 'fast_start_first_run' is given then the ramp rates of units starting the interval in fast start modes 0, 1, and 2 are excluded. If 'fast_start_second_run' is given then the ramp rates of units ending the interval in fast start modes 0, 1, and 2 are excluded, and units that start the interval in mode 2 or lower but end it in mode 3 or higher have their ramp rates adjusted to account for spending a portion of the interval constrained from ramping up by their dispatch inflexibility profile.

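As a rough picture of the 'fast_start_second_run' adjustment, a unit that only reaches mode 3 part way through the interval can be treated as having its ramp rate scaled by the fraction of the interval it spends free to ramp. The helper below is a sketch under that assumption (the function name and the 5 minute interval length are ours), not nempy's implementation.

def adjusted_ramp_up_rate(ramp_up_rate, time_in_end_mode, interval_length=5.0):
    # ramp_up_rate is in MW/h; time_in_end_mode and interval_length in minutes.
    # Only the portion of the interval spent in mode 3 or higher contributes
    # free ramping capability, so scale the ramp rate by that fraction.
    free_fraction = min(time_in_end_mode, interval_length) / interval_length
    return ramp_up_rate * free_fraction
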
Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_ramp_rates_used_for_energy_dispatch()
         unit  initial_output  ramp_up_rate  ramp_down_rate
0      AGLHAL        0.000000    720.000000      720.000000
1      AGLSOM        0.000000    480.000000      480.000000
2     ANGAST1        0.000000    840.000000      840.000000
3       ARWF1       15.800001   1200.000000      600.000000
4      BALBG1        0.000000   6000.000000     6000.000000
..        ...             ...           ...             ...
275  YARWUN_1      157.019989      0.000000        0.000000
276     YWPS1      383.959503    177.750006      177.750006
277     YWPS2      180.445572    177.750006      177.750006
278     YWPS3      353.460754    175.499997      175.499997
279     YWPS4      338.782288    180.000000      180.000000

[280 rows x 4 columns]
Parameters:

run_type (str) – Should be one of 'no_fast_start_units', 'fast_start_first_run', or 'fast_start_second_run'.

Returns:
  • unit – unique identifier for units, (as str)

  • initial_output – the output/consumption of the unit at the start of the dispatch interval, in MW, (as np.float64)

  • ramp_up_rate – the ramp up rate, in MW/h, (as np.float64)

  • ramp_down_rate – the ramp down rate, in MW/h, (as np.float64)

Return type:

pd.DataFrame

get_as_bid_ramp_rates()

Get the ramp rates bid in by units.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_as_bid_ramp_rates()
          unit  ramp_up_rate  ramp_down_rate
0       AGLHAL         720.0           720.0
1       AGLSOM         480.0           480.0
2      ANGAST1         840.0           840.0
9        ARWF1        1200.0           600.0
23      BALBG1        6000.0          6000.0
...        ...           ...             ...
989   YARWUN_1           0.0             0.0
990      YWPS1         180.0           180.0
999      YWPS2         180.0           180.0
1008     YWPS3         180.0           180.0
1017     YWPS4         180.0           180.0

[280 rows x 3 columns]
Returns:
  • unit – unique identifier for units, (as str)

  • ramp_up_rate – the ramp up rate, in MW/h, (as np.float64)

  • ramp_down_rate – the ramp down rate, in MW/h, (as np.float64)

Return type:

pd.DataFrame

get_initial_unit_output()

Get unit outputs at the start of the dispatch interval.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_initial_unit_output()
         unit  initial_output
0      AGLHAL        0.000000
1      AGLSOM        0.000000
2     ANGAST1        0.000000
3       APD01        0.000000
4       ARWF1       15.800001
..        ...             ...
283  YARWUN_1      157.019989
284     YWPS1      383.959503
285     YWPS2      180.445572
286     YWPS3      353.460754
287     YWPS4      338.782288

[288 rows x 2 columns]
Returns:
  • unit – unique identifier for units, (as str)

  • initial_output – the output/consumption of the unit at the start of the dispatch interval, in MW, (as np.float64)

Return type:

pd.DataFrame

get_fast_start_profiles_for_dispatch(unconstrained_dispatch=None)

Get the parameters needed to construct the fast start dispatch inflexibility profiles used for dispatch.

If the results of a non-fast-start constrained dispatch run are provided, these are used to commit fast start units that begin the interval in mode zero but have a non-zero dispatch result.

For more info on fast start dispatch inflexibility profiles see AEMO docs.

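To make the returned columns concrete, the sketch below steps a committed fast start unit forward through its inflexibility profile for one interval. It is an illustration under simplifying assumptions (our own helper, known mode lengths in minutes, a 5 minute interval, start mode of 1 or higher), not nempy's implementation.

def advance_fast_start_profile(start_mode, time_in_start_mode, mode_lengths, interval_length=5.0):
    # mode_lengths maps modes 1 to 3 to their lengths in minutes; mode 4 is
    # treated as open ended. Returns (end_mode, time_in_end_mode).
    mode = start_mode
    time_in_mode = time_in_start_mode + interval_length
    while mode < 4 and time_in_mode > mode_lengths[mode]:
        time_in_mode -= mode_lengths[mode]
        mode += 1
    return mode, time_in_mode
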
Returns:
  • unit – unique identifier for units, (as str)

  • end_mode – the fast start mode the unit will end the dispatch interval in, (as np.int64)

  • time_in_end_mode – the amount of time the unit will have spent in the end mode at the end of the dispatch interval, (as np.float64)

  • mode_two_length – the length of the unit's mode two, in minutes, (as np.float64)

  • mode_four_length – the length of the unit's mode four, in minutes, (as np.float64)

  • min_loading – the minimum operating level of the unit during mode three, in MW, (as np.float64)

Return type:

pd.DataFrame

get_unit_info()

Get unit information.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_info()
         unit region dispatch_type  loss_factor
0      AGLHAL    SA1     generator     0.971500
1     AGLNOW1   NSW1     generator     1.003700
2    AGLSITA1   NSW1     generator     1.002400
3      AGLSOM   VIC1     generator     0.984743
4     ANGAST1    SA1     generator     1.005674
..        ...    ...           ...          ...
477     YWNL1   VIC1     generator     0.957300
478     YWPS1   VIC1     generator     0.969600
479     YWPS2   VIC1     generator     0.957300
480     YWPS3   VIC1     generator     0.957300
481     YWPS4   VIC1     generator     0.957300

[482 rows x 4 columns]
Returns:
  • unit – unique identifier for units, (as str)

  • region – the market region in which the unit is located, (as str)

  • dispatch_type – whether the unit is a 'generator' or 'load', (as str)

  • loss_factor – the combined unit transmission and distribution loss factor, (as np.float64)

Return type:

pd.DataFrame

get_processed_bids()

Get processed unit bids.

The bids are processed by scaling for AGC enablement limits, scaling for SCADA ramp rates, scaling for the unconstrained intermittent generation forecast, and enforcing the preconditions for enabling FCAS bids. For more info on these processes see AEMO docs.

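As a concrete example of one of these steps, the sketch below caps bid-in energy availability at the UIGF forecast for semi-scheduled units. It is a simplified illustration of the kind of scaling described (the function is ours), not the full preprocessing.

import pandas as pd

def cap_availability_at_uigf(bid_availability, uigf_limits):
    # Both inputs have columns ['unit', 'capacity'], as returned by
    # get_unit_bid_availability and get_unit_uigf_limits.
    merged = pd.merge(bid_availability, uigf_limits, on='unit', how='left',
                      suffixes=('', '_uigf'))
    # Units without a UIGF forecast keep their bid-in capacity (NaN is skipped).
    merged['capacity'] = merged[['capacity', 'capacity_uigf']].min(axis=1)
    return merged[['unit', 'capacity']]
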
Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> volume_bids, price_bids = unit_data.get_processed_bids()
>>> volume_bids
        unit    service    1      2    3     4    5     6     7     8    9     10
0     AGLHAL     energy  0.0    0.0  0.0   0.0  0.0   0.0  60.0   0.0  0.0  160.0
1     AGLSOM     energy  0.0    0.0  0.0   0.0  0.0   0.0   0.0   0.0  0.0  170.0
2    ANGAST1     energy  0.0    0.0  0.0   0.0  0.0  50.0   0.0   0.0  0.0   50.0
9      ARWF1     energy  0.0  241.0  0.0   0.0  0.0   0.0   0.0   0.0  0.0    0.0
23    BALBG1     energy  0.0    0.0  0.0   0.0  0.0   0.0   0.0   0.0  0.0   30.0
..       ...        ...  ...    ...  ...   ...  ...   ...   ...   ...  ...    ...
364    YWPS4   raise_6s  0.0    0.0  0.0  10.0  5.0   0.0   0.0   0.0  0.0   10.0
365    YWPS4  lower_reg  0.0    0.0  0.0   0.0  0.0   0.0   0.0  20.0  0.0    0.0
366    YWPS4  raise_reg  0.0    0.0  0.0   0.0  0.0   0.0   5.0  10.0  0.0    5.0
369   SWAN_E  lower_reg  0.0    0.0  0.0   0.0  0.0   0.0   5.0   0.0  0.0   52.0
370   SWAN_E  raise_reg  0.0    0.0  0.0   5.0  0.0   0.0   3.0   0.0  0.0   49.0

[591 rows x 12 columns]
>>> price_bids
        unit     service           1          2           3           4           5           6           7            8             9            10
0     AGLHAL      energy  -971.50000   0.000000  270.863915  358.298915  406.873915  484.593915  562.313915  1326.641540  10277.372205  13600.018785
1     AGLSOM      energy  -984.74292   0.000000   83.703148  108.321721  142.787723  279.666989  444.119057   985.727663  13097.937562  14278.732950
2    ANGAST1      energy -1005.67390   0.000000  125.709237  201.335915  300.887574  382.135969  593.337544  1382.650761  10678.245470  14582.271550
3      ARWF1      energy  -969.10000 -63.001191    1.996346    4.002383    8.004766   15.999841   31.999682    63.999364    127.998728  14051.950000
4     BALBG1      energy  -994.80000   0.000000   19.915896   47.372376   75.177036  109.447896  298.440000   443.133660  10047.489948  14424.600000
..       ...         ...         ...        ...         ...         ...         ...         ...         ...          ...           ...           ...
586  ASQENC1    raise_6s     0.03000   0.300000    0.730000    0.990000    1.980000    5.000000    9.900000    17.700000    100.000000  10000.000000
587  ASTHYD1    raise_6s     0.00000   0.490000    1.450000    4.950000    9.950000   15.000000   60.000000   200.000000   1000.000000  14000.000000
588   VENUS1  raise_5min     0.00000   1.000000    2.780000    3.980000    4.980000    8.600000    9.300000    14.600000     20.000000   1000.000000
589   VENUS1   raise_60s     0.00000   1.000000    2.780000    3.980000    4.980000    8.600000    9.300000    14.600000     20.000000   1000.000000
590   VENUS1    raise_6s     0.01000   0.600000    2.780000    3.980000    4.980000    8.600000    9.300000    14.000000     20.000000   1000.000000

[591 rows x 12 columns]

Multiple Returns

volume_bids : pd.DataFrame

  • unit – unique identifier for units, (as str)

  • service – the service the bid applies to, (as str)

  • 1 to 10 – the volume bid in each of the ten bid bands, in MW, (as np.float64)

price_bids : pd.DataFrame

  • unit – unique identifier for units, (as str)

  • service – the service the bid applies to, (as str)

  • 1 to 10 – the price of each of the ten bid bands, in $/MW, (as np.float64)

add_fcas_trapezium_constraints()

Load the FCAS trapezium constraints into the UnitData class so subsequent method calls can access them.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)

If we try to call add_fcas_trapezium_constraints before calling get_processed_bids we get an error.

>>> unit_data.add_fcas_trapezium_constraints()
Traceback (most recent call last):
   ...
nempy.historical_inputs.units.MethodCallOrderError: Call get_processed_bids before add_fcas_trapezium_constraints.

After calling get_processed_bids the error goes away.

>>> volume_bids, price_bids =  unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()

If we try to access the trapezium constraints before calling this method we get an error.

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_fcas_max_availability()
Traceback (most recent call last):
   ...
nempy.historical_inputs.units.MethodCallOrderError: Call add_fcas_trapezium_constraints before get_fcas_max_availability.

After calling it the error goes away.

>>> volume_bids, price_bids = unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()
>>> unit_data.get_fcas_max_availability()
        unit     service  max_availability
0      APD01  raise_5min              34.0
1      APD01   raise_60s              34.0
2      APD01    raise_6s              17.0
3    ASNENC1  raise_5min              12.0
4    ASNENC1   raise_60s               4.0
..       ...         ...               ...
364    YWPS4    raise_6s              15.0
365    YWPS4   lower_reg              15.0
366    YWPS4   raise_reg              15.0
369   SWAN_E   lower_reg              10.0
370   SWAN_E   raise_reg              25.0

[311 rows x 3 columns]
Return type:

None

Raises:

MethodCallOrderError – if called before get_processed_bids

get_fcas_max_availability()

Get the unit bid maximum availability of each service.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)

Required calls before calling get_fcas_max_availability.

>>> volume_bids, price_bids =  unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()

Now the FCAS max availability can be accessed.

>>> unit_data.get_fcas_max_availability()
        unit     service  max_availability
0      APD01  raise_5min              34.0
1      APD01   raise_60s              34.0
2      APD01    raise_6s              17.0
3    ASNENC1  raise_5min              12.0
4    ASNENC1   raise_60s               4.0
..       ...         ...               ...
364    YWPS4    raise_6s              15.0
365    YWPS4   lower_reg              15.0
366    YWPS4   raise_reg              15.0
369   SWAN_E   lower_reg              10.0
370   SWAN_E   raise_reg              25.0

[311 rows x 3 columns]
Returns:
  • unit – unique identifier for units, (as str)

  • service – the service the bid applies to, (as str)

  • max_availability – the unit bid maximum availability, in MW, (as np.float64)

Return type:

pd.DataFrame

Raises:

MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.

get_fcas_regulation_trapeziums()

Get the unit bid FCAS trapeziums for regulation services.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)

Required calls before calling get_fcas_regulation_trapeziums.

>>> volume_bids, price_bids =  unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()

Now the regulation trapeziums can be accessed.

>>> unit_data.get_fcas_regulation_trapeziums()
         unit    service  max_availability  enablement_min  low_break_point  high_break_point  enablement_max
16       BW01  lower_reg         35.015640       309.27185       344.287490         520.80701       520.80701
17       BW01  raise_reg         35.015640       309.27185       309.271850         485.79137       520.80701
24   CALL_B_1  lower_reg         15.000000       180.00000       195.000000         270.30002       270.30002
25   CALL_B_1  raise_reg         15.000000       180.00000       180.000000         205.00000       220.00000
55       ER01  lower_reg         24.906273       490.02502       514.931293         680.00000       680.00000
..        ...        ...               ...             ...              ...               ...             ...
359     YWPS3  raise_reg         14.625000       250.00000       250.000000         370.37500       385.00000
365     YWPS4  lower_reg         15.000000       250.00000       265.000000         385.00000       385.00000
366     YWPS4  raise_reg         15.000000       250.00000       250.000000         370.00000       385.00000
369    SWAN_E  lower_reg         10.000000       145.00000       202.000000         362.50000       362.50000
370    SWAN_E  raise_reg         25.000000       145.00000       145.000000         305.50000       362.50000

[75 rows x 7 columns]
Returns:
  • unit – unique identifier of a dispatch unit, (as str)

  • service – the regulation service being offered, (as str)

  • max_availability – the maximum volume of the regulation service, in MW, (as np.float64)

  • enablement_min – the energy dispatch level at which the unit can begin to provide the regulation service, in MW, (as np.float64)

  • low_break_point – the energy dispatch level at which the unit can provide the full regulation service offered, in MW, (as np.float64)

  • high_break_point – the energy dispatch level at which the unit can no longer provide the full regulation service offered, in MW, (as np.float64)

  • enablement_max – the energy dispatch level at which the unit can no longer provide any regulation service, in MW, (as np.float64)

Return type:

pd.DataFrame

Raises:

MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.

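The trapezium columns above define how much of a service a unit can provide at a given energy dispatch level. The helper below sketches that standard calculation; it is our own illustration of how the columns fit together, not part of nempy.

def fcas_available(energy, max_availability, enablement_min, low_break_point,
                   high_break_point, enablement_max):
    # Available FCAS volume, in MW, at a given energy dispatch level, derived
    # from the trapezium parameters described above.
    if energy < enablement_min or energy > enablement_max:
        return 0.0
    available = max_availability
    if low_break_point > enablement_min:  # left hand slope of the trapezium
        slope = max_availability / (low_break_point - enablement_min)
        available = min(available, slope * (energy - enablement_min))
    if enablement_max > high_break_point:  # right hand slope of the trapezium
        slope = max_availability / (enablement_max - high_break_point)
        available = min(available, slope * (enablement_max - energy))
    return available
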
get_scada_ramp_down_rates_of_lower_reg_units(run_type='no_fast_start_units')

Get the SCADA ramp down rates for units with a lower regulation bid.

Only units with SCADA ramp rates and a lower regulation bid that passes the enablement criteria are returned.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)

Required calls before calling get_scada_ramp_down_rates_of_lower_reg_units.

>>> volume_bids, price_bids =  unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()

Now the method can be called.

>>> unit_data.get_scada_ramp_down_rates_of_lower_reg_units().head()
        unit  initial_output  ramp_down_rate
36      BW01      425.125000      420.187683
40  CALL_B_1      219.699997      240.000000
74      ER01      636.000000      298.875275
76      ER03      678.925049      297.187500
77      ER04      518.550049      298.312225
Returns:
  • unit – unique identifier for units, (as str)

  • initial_output – the output/consumption of the unit at the start of the dispatch interval, in MW, (as np.float64)

  • ramp_down_rate – the ramp down rate, in MW/h, (as np.float64)

Return type:

pd.DataFrame

Raises:

MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.

get_scada_ramp_up_rates_of_raise_reg_units(run_type='no_fast_start_units')

Get the SCADA ramp up rates for units with a raise regulation bid.

Only units with SCADA ramp rates and a raise regulation bid that passes the enablement criteria are returned.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)

Required calls before calling get_scada_ramp_up_rates_of_raise_reg_units.

>>> volume_bids, price_bids =  unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()

Now the method can be called.

>>> unit_data.get_scada_ramp_up_rates_of_raise_reg_units().head()
        unit  initial_output  ramp_up_rate
36      BW01      425.125000    420.187683
40  CALL_B_1      219.699997    240.000000
74      ER01      636.000000    299.999542
76      ER03      678.925049    297.750092
77      ER04      518.550049    298.875275
Returns:
  • unit – unique identifier for units, (as str)

  • initial_output – the output/consumption of the unit at the start of the dispatch interval, in MW, (as np.float64)

  • ramp_up_rate – the ramp up rate, in MW/h, (as np.float64)

Return type:

pd.DataFrame

Raises:

MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.

get_contingency_services()

Get the unit bid FCAS trapeziums for contingency services.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)

Required calls before calling get_contingency_services.

>>> volume_bids, price_bids =  unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()

Now the contingency service trapeziums can be accessed.

>>> unit_data.get_contingency_services()
        unit     service  max_availability  enablement_min  low_break_point  high_break_point  enablement_max
0      APD01  raise_5min              34.0             0.0              0.0               0.0             0.0
1      APD01   raise_60s              34.0             0.0              0.0               0.0             0.0
2      APD01    raise_6s              17.0             0.0              0.0               0.0             0.0
3    ASNENC1  raise_5min              12.0             0.0              0.0               0.0             0.0
4    ASNENC1   raise_60s               4.0             0.0              0.0               0.0             0.0
..       ...         ...               ...             ...              ...               ...             ...
360    YWPS4  lower_5min              15.0           250.0            265.0             385.0           385.0
361    YWPS4   lower_60s              20.0           250.0            270.0             385.0           385.0
362    YWPS4    lower_6s              25.0           250.0            275.0             385.0           385.0
363    YWPS4   raise_60s              10.0           220.0            220.0             390.0           400.0
364    YWPS4    raise_6s              15.0           220.0            220.0             390.0           405.0

[236 rows x 7 columns]
Returns:
  • unit – unique identifier of a dispatch unit, (as str)

  • service – the contingency service being offered, (as str)

  • max_availability – the maximum volume of the contingency service, in MW, (as np.float64)

  • enablement_min – the energy dispatch level at which the unit can begin to provide the contingency service, in MW, (as np.float64)

  • low_break_point – the energy dispatch level at which the unit can provide the full contingency service offered, in MW, (as np.float64)

  • high_break_point – the energy dispatch level at which the unit can no longer provide the full contingency service offered, in MW, (as np.float64)

  • enablement_max – the energy dispatch level at which the unit can no longer provide any contingency service, in MW, (as np.float64)

Return type:

pd.DataFrame

Raises:

MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.

interconnectors

Classes:

InterconnectorData(raw_input_loader)

Loads interconnector related raw inputs and preprocesses them for compatibility with nempy.markets.SpotMarket

Functions:

create_loss_functions(...)

Creates a loss function for each interconnector.

class nempy.historical_inputs.interconnectors.InterconnectorData(raw_input_loader)

Loads interconnector related raw inputs and preprocesses them for compatibility with nempy.markets.SpotMarket

Examples

This example shows the setup used for the examples in the class methods. This setup is used to create a RawInputsLoader by calling the function _test_setup.

>>> import sqlite3
>>> from nempy.historical_inputs import mms_db
>>> from nempy.historical_inputs import xml_cache
>>> from nempy.historical_inputs import loaders

The InterconnectorData class requires a RawInputsLoader instance.

>>> con = sqlite3.connect('market_management_system.db')
>>> mms_db_manager = mms_db.DBManager(connection=con)
>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager)
>>> inputs_loader.set_interval('2019/01/10 12:05:00')

Create an InterconnectorData instance.

>>> interconnector_data = InterconnectorData(inputs_loader)
>>> interconnector_data.get_interconnector_definitions()
  interconnector from_region to_region     min     max       link  from_region_loss_factor  to_region_loss_factor  generic_constraint_factor
0           V-SA        VIC1       SA1  -850.0   950.0       V-SA                   1.0000                 1.0000                          1
1      N-Q-MNSP1        NSW1      QLD1  -264.0   264.0  N-Q-MNSP1                   1.0000                 1.0000                          1
2      NSW1-QLD1        NSW1      QLD1 -1659.0  1229.0  NSW1-QLD1                   1.0000                 1.0000                          1
3      V-S-MNSP1        VIC1       SA1  -270.0   270.0  V-S-MNSP1                   1.0000                 1.0000                          1
5      VIC1-NSW1        VIC1      NSW1 -2299.0  2399.0  VIC1-NSW1                   1.0000                 1.0000                          1
0      T-V-MNSP1        TAS1      VIC1     0.0   478.0    BLNKTAS                   1.0000                 0.9839                          1
1      T-V-MNSP1        VIC1      TAS1     0.0   478.0    BLNKVIC                   0.9839                 1.0000                         -1
Parameters:

raw_input_loader (loaders.RawInputsLoader) –

Methods:

get_interconnector_loss_model()

Returns inputs in the format needed to set interconnector losses in the SpotMarket class.

get_interconnector_definitions()

Returns inputs in the format needed to create interconnectors in the SpotMarket class.

get_interconnector_loss_model()

Returns inputs in the format needed to set interconnector losses in the SpotMarket class.

Examples

>>> inputs_loader = _test_setup()
>>> interconnector_data = InterconnectorData(inputs_loader)
>>> loss_function, interpolation_break_points = interconnector_data.get_interconnector_loss_model()
>>> print(loss_function)
  interconnector       link                                      loss_function  from_region_loss_share
0           V-SA       V-SA  <function InterconnectorData.get_interconnecto...                    0.78
1      N-Q-MNSP1  N-Q-MNSP1  <function InterconnectorData.get_interconnecto...                    0.66
2      NSW1-QLD1  NSW1-QLD1  <function InterconnectorData.get_interconnecto...                    0.68
3      V-S-MNSP1  V-S-MNSP1  <function InterconnectorData.get_interconnecto...                    0.67
4      VIC1-NSW1  VIC1-NSW1  <function InterconnectorData.get_interconnecto...                    0.32
5      T-V-MNSP1    BLNKTAS  <function InterconnectorData.get_interconnecto...                    1.00
6      T-V-MNSP1    BLNKVIC  <function InterconnectorData.get_interconnecto...                    1.00
>>> print(interpolation_break_points)
    interconnector     link  loss_segment  break_point
0             V-SA     V-SA             1       -851.0
1             V-SA     V-SA             2       -835.0
2             V-SA     V-SA             3       -820.0
3             V-SA     V-SA             4       -805.0
4             V-SA     V-SA             5       -790.0
..             ...      ...           ...          ...
599      T-V-MNSP1  BLNKVIC           -80       -546.0
600      T-V-MNSP1  BLNKVIC           -81       -559.0
601      T-V-MNSP1  BLNKVIC           -82       -571.0
602      T-V-MNSP1  BLNKVIC           -83       -583.0
603      T-V-MNSP1  BLNKVIC           -84       -595.0

[604 rows x 4 columns]

Multiple Returns

loss_functions : pd.DataFrame

Columns:

interconnector – unique identifier of an interconnector, (as str)

from_region_loss_share – the fraction of loss occurring in the from region, 0.0 to 1.0, (as np.float64)

loss_function – a function that takes a flow in MW as a float and returns the losses in MW, (as callable)

interpolation_break_points : pd.DataFrame

Columns:

interconnector – unique identifier of an interconnector, (as str)

loss_segment – unique identifier of a loss segment on an interconnector basis, (as np.float64)

break_point – points between which the loss function will be linearly interpolated, in MW, (as np.float64)
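These two DataFrames are formatted for the SpotMarket loss modelling interface. A minimal sketch of the hand-off, assuming market is an existing nempy.markets.SpotMarket instance with the interconnectors already defined, and the set_interconnector_losses method as used in nempy's worked examples:

>>> loss_functions, interpolation_break_points = \
...     interconnector_data.get_interconnector_loss_model()
>>> market.set_interconnector_losses(loss_functions,
...                                  interpolation_break_points)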

get_interconnector_definitions()

Returns inputs in the format needed to create interconnectors in the SpotMarket class.

Examples

>>> inputs_loader = _test_setup()
>>> interconnector_data = InterconnectorData(inputs_loader)
>>> interconnector_data.get_interconnector_definitions()
  interconnector from_region to_region     min     max       link  from_region_loss_factor  to_region_loss_factor  generic_constraint_factor
0           V-SA        VIC1       SA1  -850.0   950.0       V-SA                   1.0000                 1.0000                          1
1      N-Q-MNSP1        NSW1      QLD1  -264.0   264.0  N-Q-MNSP1                   1.0000                 1.0000                          1
2      NSW1-QLD1        NSW1      QLD1 -1659.0  1229.0  NSW1-QLD1                   1.0000                 1.0000                          1
3      V-S-MNSP1        VIC1       SA1  -270.0   270.0  V-S-MNSP1                   1.0000                 1.0000                          1
5      VIC1-NSW1        VIC1      NSW1 -2299.0  2399.0  VIC1-NSW1                   1.0000                 1.0000                          1
0      T-V-MNSP1        TAS1      VIC1     0.0   478.0    BLNKTAS                   1.0000                 0.9839                          1
1      T-V-MNSP1        VIC1      TAS1     0.0   478.0    BLNKVIC                   0.9839                 1.0000                         -1
Returns:

Columns:

interconnector – unique identifier of an interconnector, (as str)

to_region – the region that receives power when flow is in the positive direction, (as str)

from_region – the region that power is drawn from when flow is in the positive direction, (as str)

max – the maximum power flow on the interconnector, in MW, (as np.float64)

min – the minimum power flow on the interconnector; if power can flow in the negative direction then this will be negative, in MW, (as np.float64)

from_region_loss_factor – the loss factor between the from end of the interconnector and the regional reference node, (as np.float64)

to_region_loss_factor – the loss factor between the to end of the interconnector and the regional reference node, (as np.float64)

Return type:

pd.DataFrame
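The returned DataFrame is formatted for creating interconnectors in a market model. A minimal sketch, assuming market is an existing nempy.markets.SpotMarket instance and the set_interconnectors method as used in nempy's worked examples:

>>> interconnectors = interconnector_data.get_interconnector_definitions()
>>> market.set_interconnectors(interconnectors)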

nempy.historical_inputs.interconnectors.create_loss_functions(interconnector_coefficients, demand_coefficients, demand)

Creates a loss function for each interconnector.

Transforms the dynamic, demand dependent interconnector loss functions into functions that depend only on interconnector flow, i.e. takes the function f and creates g by pre-calculating the demand dependent terms.

f(inter_flow, flow_coefficient, nsw_demand, nsw_coefficient, qld_demand, qld_coefficient) = inter_losses

becomes

g(inter_flow) = inter_losses

The mathematics of the demand dependent loss functions is described in sections 3 to 5 of AEMO's Marginal Loss Factors documentation.
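Concretely, the transformation is partial application: the demand dependent terms of the marginal loss factor equation are summed once and captured in a closure that depends only on flow. The following is a minimal illustrative sketch of the idea, not nempy's internal code (make_flow_only_loss_function and its arguments are illustrative names); with the VIC1-NSW1 inputs from the example below it reproduces the documented result:

>>> def make_flow_only_loss_function(loss_constant, flow_coefficient, demand_terms):
...     # demand_terms: pairs of (regional demand in MW, demand coefficient).
...     # Pre-calculate the demand dependent part of the marginal loss factor.
...     demand_component = sum(d * c for d, c in demand_terms)
...     def g(flow_mw):
...         # Losses are the integral of (marginal loss factor - 1) over flow.
...         return ((loss_constant - 1 + demand_component) * flow_mw
...                 + (flow_coefficient / 2) * flow_mw ** 2)
...     return g

>>> g = make_flow_only_loss_function(
...     1.0657, 0.00017027,
...     [(7000.0, 0.000021734), (6000.0, -0.000031523), (3000.0, -0.000065967)])
>>> round(g(600.0), 3)
-70.872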

Examples

>>> import pandas as pd

Some arbitrary regional demands.

>>> demand = pd.DataFrame({
...   'region': ['VIC1', 'NSW1', 'QLD1', 'SA1'],
...   'loss_function_demand': [6000.0 , 7000.0, 5000.0, 3000.0]})

Loss model details from the January 2020 NEMWeb LOSSFACTORMODEL file.

>>> demand_coefficients = pd.DataFrame({
...   'interconnector': ['NSW1-QLD1', 'NSW1-QLD1', 'VIC1-NSW1',
...                      'VIC1-NSW1', 'VIC1-NSW1'],
...   'region': ['NSW1', 'QLD1', 'NSW1', 'VIC1', 'SA1'],
...   'demand_coefficient': [-0.00000035146, 0.000010044,
...                           0.000021734, -0.000031523,
...                          -0.000065967]})

Loss model details from the January 2020 NEMWeb INTERCONNECTORCONSTRAINT file.

>>> interconnector_coefficients = pd.DataFrame({
...   'interconnector': ['NSW1-QLD1', 'VIC1-NSW1'],
...   'loss_constant': [0.9529, 1.0657],
...   'flow_coefficient': [0.00019617, 0.00017027],
...   'from_region_loss_share': [0.5, 0.5]})

Create the loss functions

>>> loss_functions = create_loss_functions(interconnector_coefficients,
...                                        demand_coefficients, demand)

Let's use one of the loss functions. First, get the loss function of VIC1-NSW1 and call it g.

>>> g = loss_functions[loss_functions['interconnector'] == 'VIC1-NSW1']['loss_function'].iloc[0]

Calculate the losses at 600 MW flow

>>> print(g(600.0))
-70.87199999999996

Now for NSW1-QLD1

>>> h = loss_functions[loss_functions['interconnector'] == 'NSW1-QLD1']['loss_function'].iloc[0]
>>> print(h(600.0))
35.70646799999993
Parameters:
  • interconnector_coefficients (pd.DataFrame) –

    Columns:

    interconnector – unique identifier of an interconnector, (as str)

    loss_constant – the constant term in the interconnector loss factor equation, (as np.float64)

    flow_coefficient – the coefficient of the interconnector flow variable in the loss factor equation, (as np.float64)

    from_region_loss_share – the proportion of loss attributed to the from region; the remainder is attributed to the to region, (as np.float64)

  • demand_coefficients (pd.DataFrame) –

    Columns:

    interconnector – unique identifier of an interconnector, (as str)

    region – the market region whose demand the coefficient applies to, (as str)

    demand_coefficient – the coefficient of the regional demand variable in the loss factor equation, (as np.float64)

  • demand (pd.DataFrame) –

    Columns:

    region – unique identifier of a region, (as str)

    loss_function_demand – the estimated regional demand, as calculated by initial supply + demand forecast, in MW, (as np.float64)

Returns:

loss_functions

Columns:

interconnector – unique identifier of an interconnector, (as str)

loss_function – a function object that takes interconnector flow (as float) as input and returns interconnector losses (as float).

Return type:

pd.DataFrame

demand

Classes:

DemandData(raw_inputs_loader)

Loads demand related raw data and preprocesses it for compatibility with the SpotMarket class.

class nempy.historical_inputs.demand.DemandData(raw_inputs_loader)

Loads demand related raw data and preprocesses it for compatibility with the SpotMarket class.

Examples

The DemandData class requires a RawInputsLoader instance.

>>> import sqlite3
>>> from nempy.historical_inputs import mms_db
>>> from nempy.historical_inputs import xml_cache
>>> from nempy.historical_inputs import loaders
>>> con = sqlite3.connect('market_management_system.db')
>>> mms_db_manager = mms_db.DBManager(connection=con)
>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager)
>>> inputs_loader.set_interval('2019/01/10 12:05:00')
>>> demand_data = DemandData(inputs_loader)
>>> demand_data.get_operational_demand()
  region   demand
0   NSW1  8540.33
1   QLD1  7089.69
2    SA1  1019.21
3   TAS1  1070.89
4   VIC1  4500.71
Parameters:

raw_inputs_loader (loaders.RawInputsLoader) –

Methods:

get_operational_demand()

Get the operational demand used to determine the regional energy dispatch constraints.

get_operational_demand()

Get the operational demand used to determine the regional energy dispatch constraints.

Examples

See class level example.

Returns:

Columns:

region – unique identifier of a market region, (as str)

demand – the non-dispatchable demand of the region, in MW, (as np.float64)

loss_function_demand – the measure of demand used when creating interconnector loss functions, in MW, (as np.float64)

Return type:

pd.DataFrame
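A minimal sketch of the hand-off to a market model, assuming market is an existing nempy.markets.SpotMarket instance and the set_demand_constraints method as used in nempy's worked examples, which expects the region and demand columns:

>>> regional_demand = demand_data.get_operational_demand()
>>> market.set_demand_constraints(regional_demand.loc[:, ['region', 'demand']])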

constraints

Classes:

ConstraintData(raw_inputs_loader)

Loads generic constraint related raw inputs and preprocesses them for compatibility with nempy.markets.SpotMarket

class nempy.historical_inputs.constraints.ConstraintData(raw_inputs_loader)

Loads generic constraint related raw inputs and preprocesses them for compatibility with nempy.markets.SpotMarket

Examples

This example shows the setup used for the examples in the class methods. This setup is used to create a RawInputsLoader by calling the function _test_setup.

>>> import sqlite3
>>> from nempy.historical_inputs import mms_db
>>> from nempy.historical_inputs import xml_cache
>>> from nempy.historical_inputs import loaders

The ConstraintData class requires a RawInputsLoader instance.

>>> con = sqlite3.connect('market_management_system.db')
>>> mms_db_manager = mms_db.DBManager(connection=con)
>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager)
>>> inputs_loader.set_interval('2019/01/01 00:00:00')

Create a ConstraintData instance.

>>> constraint_data = ConstraintData(inputs_loader)
>>> constraint_data.get_rhs_and_type_excluding_regional_fcas_constraints()
                     set           rhs type
0               #BANN1_E     32.000000   <=
1              #BNGSF2_E      3.000000   <=
2            #CROWLWF1_E     43.000000   <=
3             #CSPVPS1_E     29.000000   <=
4             #DAYDSF1_E      0.000000   <=
..                   ...           ...  ...
704          V_OWF_NRB_0  10000.001000   <=
705  V_OWF_TGTSNRBHTN_30  10030.000000   <=
706        V_S_NIL_ROCOF    812.280029   <=
707          V_T_NIL_BL1    478.000000   <=
708        V_T_NIL_FCSPS    425.154024   <=

[574 rows x 3 columns]
Parameters:

raw_inputs_loader (loaders.RawInputsLoader) –

Methods:

get_rhs_and_type_excluding_regional_fcas_constraints()

Get the rhs values and types for generic constraints, excluding regional FCAS constraints.

get_rhs_and_type()

Get the rhs values and types for generic constraints.

get_unit_lhs()

Get the lhs coefficients of units.

get_interconnector_lhs()

Get the lhs coefficients of interconnectors.

get_region_lhs()

Get the lhs coefficients of regions.

get_fcas_requirements()

Get constraint details needed for setting FCAS requirements.

get_violation_costs()

Get the violation costs for generic constraints.

get_constraint_violation_prices()

Get the violation costs of non-generic constraint groups.

is_over_constrained_dispatch_rerun()

Get a boolean indicating if the over constrained dispatch rerun process was used for this interval.

get_rhs_and_type_excluding_regional_fcas_constraints()

Get the rhs values and types for generic constraints, excluding regional FCAS constraints.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_rhs_and_type_excluding_regional_fcas_constraints()
                     set           rhs type
0               #BANN1_E     32.000000   <=
1              #BNGSF2_E      3.000000   <=
2            #CROWLWF1_E     43.000000   <=
3             #CSPVPS1_E     29.000000   <=
4             #DAYDSF1_E      0.000000   <=
..                   ...           ...  ...
704          V_OWF_NRB_0  10000.001000   <=
705  V_OWF_TGTSNRBHTN_30  10030.000000   <=
706        V_S_NIL_ROCOF    812.280029   <=
707          V_T_NIL_BL1    478.000000   <=
708        V_T_NIL_FCSPS    425.154024   <=

[574 rows x 3 columns]
Returns:

Columns:

set – the unique identifier of the constraint set, (as str)

type – the direction of the constraint, >=, <= or =, (as str)

rhs – the right hand side value of the constraint, (as np.float64)

Return type:

pd.DataFrame

get_rhs_and_type()

Get the rhs values and types for generic constraints.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_rhs_and_type()
                     set           rhs type
0               #BANN1_E     32.000000   <=
1              #BNGSF2_E      3.000000   <=
2            #CROWLWF1_E     43.000000   <=
3             #CSPVPS1_E     29.000000   <=
4             #DAYDSF1_E      0.000000   <=
..                   ...           ...  ...
704          V_OWF_NRB_0  10000.001000   <=
705  V_OWF_TGTSNRBHTN_30  10030.000000   <=
706        V_S_NIL_ROCOF    812.280029   <=
707          V_T_NIL_BL1    478.000000   <=
708        V_T_NIL_FCSPS    425.154024   <=

[709 rows x 3 columns]
Returns:

Columns:

set – the unique identifier of the constraint set, (as str)

type – the direction of the constraint, >=, <= or =, (as str)

rhs – the right hand side value of the constraint, (as np.float64)

Return type:

pd.DataFrame

get_unit_lhs()

Get the lhs coefficients of units.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_unit_lhs()
                      set      unit service  coefficient
0                #BANN1_E     BANN1  energy          1.0
1               #BNGSF2_E    BNGSF2  energy          1.0
2             #CROWLWF1_E  CROWLWF1  energy          1.0
3              #CSPVPS1_E   CSPVPS1  energy          1.0
4              #DAYDSF1_E   DAYDSF1  energy          1.0
...                   ...       ...     ...          ...
5864      V_ARWF_FSTTRP_5     ARWF1  energy          1.0
5865      V_MTGBRAND_33WT  MTGELWF1  energy          1.0
5866     V_OAKHILL_TFB_42  OAKLAND1  energy          1.0
5867          V_OWF_NRB_0  OAKLAND1  energy          1.0
5868  V_OWF_TGTSNRBHTN_30  OAKLAND1  energy          1.0

[5869 rows x 4 columns]
Returns:

Columns:

set – the unique identifier of the constraint set to map the lhs coefficients to, (as str)

unit – the unit whose variables will be mapped to the lhs, (as str)

service – the service whose variables will be mapped to the lhs, (as str)

coefficient – the lhs coefficient, (as np.float64)

Return type:

pd.DataFrame

get_interconnector_lhs()

Get the lhs coefficients of interconnectors.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_interconnector_lhs()
                     set interconnector  coefficient
0               DATASNAP      N-Q-MNSP1          1.0
1        DATASNAP_DFS_LS      N-Q-MNSP1          1.0
2      DATASNAP_DFS_NCAN      N-Q-MNSP1          1.0
3    DATASNAP_DFS_NCWEST      N-Q-MNSP1          1.0
4      DATASNAP_DFS_NNTH      N-Q-MNSP1          1.0
..                   ...            ...          ...
631      V^^S_NIL_TBSE_1           V-SA          1.0
632      V^^S_NIL_TBSE_2           V-SA          1.0
633        V_S_NIL_ROCOF           V-SA          1.0
634          V_T_NIL_BL1      T-V-MNSP1         -1.0
635        V_T_NIL_FCSPS      T-V-MNSP1         -1.0

[636 rows x 3 columns]
Returns:

Columns:

set – the unique identifier of the constraint set to map the lhs coefficients to, (as str)

interconnector – the interconnector whose variables will be mapped to the lhs, (as str)

coefficient – the lhs coefficient, (as np.float64)

Return type:

pd.DataFrame

get_region_lhs()

Get the lhs coefficients of regions.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_region_lhs()
                   set region    service  coefficient
0        F_I+LREG_0120   NSW1  lower_reg          1.0
1        F_I+LREG_0120   QLD1  lower_reg          1.0
2        F_I+LREG_0120    SA1  lower_reg          1.0
3        F_I+LREG_0120   TAS1  lower_reg          1.0
4        F_I+LREG_0120   VIC1  lower_reg          1.0
..                 ...    ...        ...          ...
478   F_T+NIL_WF_TG_R5   TAS1  raise_reg          1.0
479   F_T+NIL_WF_TG_R6   TAS1   raise_6s          1.0
480  F_T+NIL_WF_TG_R60   TAS1  raise_60s          1.0
481      F_T+RREG_0050   TAS1  raise_reg          1.0
482    F_T_NIL_MINP_R6   TAS1   raise_6s          1.0

[483 rows x 4 columns]
Returns:

Columns:

set – the unique identifier of the constraint set to map the lhs coefficients to, (as str)

region – the region whose variables will be mapped to the lhs, (as str)

service – the service whose variables will be mapped to the lhs, (as str)

coefficient – the lhs coefficient, (as np.float64)

Return type:

pd.DataFrame
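Together, the rhs/type and lhs methods provide everything needed to rebuild the generic constraints in a market model. A minimal sketch, assuming market is an existing nempy.markets.SpotMarket instance and the set_generic_constraints and link_*_to_generic_constraints methods as used in nempy's historical dispatch examples:

>>> market.set_generic_constraints(
...     constraint_data.get_rhs_and_type_excluding_regional_fcas_constraints())
>>> market.link_units_to_generic_constraints(constraint_data.get_unit_lhs())
>>> market.link_interconnectors_to_generic_constraints(
...     constraint_data.get_interconnector_lhs())
>>> market.link_regions_to_generic_constraints(constraint_data.get_region_lhs())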

get_fcas_requirements()

Get constraint details needed for setting FCAS requirements.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_fcas_requirements()
                   set    service region type       volume
0        F_I+LREG_0120  lower_reg   NSW1   >=   120.000000
1        F_I+LREG_0120  lower_reg   QLD1   >=   120.000000
2        F_I+LREG_0120  lower_reg    SA1   >=   120.000000
3        F_I+LREG_0120  lower_reg   TAS1   >=   120.000000
4        F_I+LREG_0120  lower_reg   VIC1   >=   120.000000
..                 ...        ...    ...  ...          ...
478   F_T+NIL_WF_TG_R5  raise_reg   TAS1   >=    62.899972
479   F_T+NIL_WF_TG_R6   raise_6s   TAS1   >=    67.073327
480  F_T+NIL_WF_TG_R60  raise_60s   TAS1   >=    83.841637
481      F_T+RREG_0050  raise_reg   TAS1   >= -9950.000000
482    F_T_NIL_MINP_R6   raise_6s   TAS1   >=    35.000000

[483 rows x 5 columns]
Returns:

Columns:

set – unique identifier of the requirement set, (as str)

service – the service or services the requirement set applies to, (as str)

region – the regions that can contribute to meeting a requirement, (as str)

volume – the amount of service required, in MW, (as np.float64)

type – the direction of the constraint, '=', '>=' or '<='; optional, a value of '=' is assumed if the column is missing, (as str)

Return type:

pd.DataFrame
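A minimal sketch of the hand-off, assuming market is an existing nempy.markets.SpotMarket instance and the set_fcas_requirements_constraints method as used in nempy's historical dispatch examples:

>>> market.set_fcas_requirements_constraints(
...     constraint_data.get_fcas_requirements())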

get_violation_costs()

Get the violation costs for generic constraints.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_violation_costs()
                     set       cost
0               #BANN1_E  5220000.0
1              #BNGSF2_E  5220000.0
2            #CROWLWF1_E  5220000.0
3             #CSPVPS1_E  5220000.0
4             #DAYDSF1_E  5220000.0
..                   ...        ...
704          V_OWF_NRB_0  5220000.0
705  V_OWF_TGTSNRBHTN_30  5220000.0
706        V_S_NIL_ROCOF   507500.0
707          V_T_NIL_BL1  5220000.0
708        V_T_NIL_FCSPS   435000.0

[709 rows x 2 columns]
Returns:

Columns:

set – the unique identifier of the constraint set, (as str)

cost – the cost to the objective function of violating the constraint, (as np.float64)

Return type:

pd.DataFrame

get_constraint_violation_prices()

Get the violation costs of non-generic constraint groups.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_constraint_violation_prices()
{'regional_demand': 2175000.0, 'interocnnector': 16675000.0, 'generic_constraint': 435000.0, 'ramp_rate': 16747500.0, 'unit_capacity': 5365000.0, 'energy_offer': 16457500.0, 'fcas_profile': 2247500.0, 'fcas_max_avail': 2247500.0, 'fcas_enablement_min': 1015000.0, 'fcas_enablement_max': 1015000.0, 'fast_start': 16385000.0, 'mnsp_ramp_rate': 16747500.0, 'msnp_offer': 16457500.0, 'mnsp_capacity': 5292500.0, 'uigf': 5582500.0, 'voll': 14500.0, 'tiebreak': 1e-06}
Return type:

dict

is_over_constrained_dispatch_rerun()

Get a boolean indicating if the over constrained dispatch rerun process was used for this interval.

Examples

>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.is_over_constrained_dispatch_rerun()
False
Return type:

bool
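When benchmarking a re-dispatch against historical outcomes, intervals that went through the over constrained dispatch rerun process will generally not be reproduced by a single solve, so a common pattern is to flag and skip them:

>>> if constraint_data.is_over_constrained_dispatch_rerun():
...     print('Skipping interval: over constrained dispatch rerun used.')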

RHSCalc

Classes:

RHSCalc(xml_cache_manager)

Engine for calculating generic constraint right hand side (RHS) values from scratch based on the equations provided in the NEMDE xml input files.

class nempy.historical_inputs.rhs_calculator.RHSCalc(xml_cache_manager)

Engine for calculating generic constraint right hand side (RHS) values from scratch based on the equations provided in the NEMDE xml input files.

AEMO publishes the RHS values used in dispatch; however, those values are dynamically calculated by NEMDE and depend on inputs such as transmission line flows, generator on statuses, and generator output levels. This class allows the user to update the input values which the RHS equations depend on and then recalculate the RHS values. The primary reason for implementing this functionality is to allow the Basslink switch run to be implemented using Nempy, which requires that the RHS values of a number of constraints be recalculated for the case where the Basslink frequency controller is not active.

The methodology for the calculation is based on the description in the Constraint Implementation Guidelines published by AEMO, see AEMO doc. The main limitation of the method implemented is that it does not allow for the calculation of constraints that use the BRANCH operation. In 2013 there were four constraints using the branching operation (V^SML_NIL_3, V^SML_NSWRB_2, V^S_HYCP, Q^NIL_GC), and as of 2023 the branch operation appears to no longer be in active use. While there are some differences between the RHS values produced by this method and those calculated by NEMDE, they are generally small.

Examples

>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> xml_cache_manager.load_interval('2019/01/01 00:00:00')
>>> rhs_calculator = RHSCalc(xml_cache_manager)
Parameters:

xml_cache_manager (XMLCacheManager) – an instance of the nempy XMLCacheManager class.

Methods:

get_nemde_rhs(constraint_id)

Get the RHS value of a constraint as calculated by NEMDE.

compute_constraint_rhs(constraint_id)

Calculates the RHS values of the specified constraint or list of constraints.

get_rhs_constraint_equations_that_depend_value(...)

A helper method used to find which constraints' RHS values depend on a given input value.

update_spd_id_value(spd_id, type, value)

Updates the value of one of the inputs which the RHS constraint equations depend on.

get_nemde_rhs(constraint_id)

Get the RHS value of a constraint as calculated by NEMDE. This method is implemented primarily to assist with testing.

Examples

>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> xml_cache_manager.load_interval('2019/01/01 00:00:00')
>>> rhs_calculator = RHSCalc(xml_cache_manager)
>>> rhs_calculator.get_nemde_rhs("F_MAIN++NIL_BL_R60")
-10290.279635
Parameters:

constraint_id (str) – the unique ID of the constraint

Return type:

float

compute_constraint_rhs(constraint_id)

Calculates the RHS values of the specified constraint or list of constraints.

Examples

>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> xml_cache_manager.load_interval('2019/01/01 00:00:00')
>>> rhs_calculator = RHSCalc(xml_cache_manager)
>>> rhs_calculator.compute_constraint_rhs('F_MAIN++NIL_BL_R60')
-10290.737541856766
>>> rhs_calculator.compute_constraint_rhs(['F_MAIN++NIL_BL_R60', 'F_MAIN++NIL_BL_R6'])
                  set           rhs
0  F_MAIN++NIL_BL_R60 -10290.737542
1   F_MAIN++NIL_BL_R6 -10581.475084
Parameters:

constraint_id (str or list[str]) – the unique ID of the constraint, or a list of constraint IDs

Return type:

float or pandas DataFrame

get_rhs_constraint_equations_that_depend_value(spd_id, type)

A helper method used to find which constraints' RHS values depend on a given input value.

Examples

>>> xml_cache_manager = xml_cache.XMLCacheManager('nemde_cache_2014_12')
>>> xml_cache_manager.load_interval('2014/12/05 00:00:00')
>>> rhs_calculator = RHSCalc(xml_cache_manager)
>>> rhs_calculator.get_rhs_constraint_equations_that_depend_value('BL_FREQ_ONSTATUS', 'W')
['F_MAIN++APD_TL_L5', 'F_MAIN++APD_TL_L6', 'F_MAIN++APD_TL_L60', 'F_MAIN++ML_L5_0400', 'F_MAIN++ML_L5_APD', 'F_MAIN++ML_L60_0400', 'F_MAIN++ML_L60_APD', 'F_MAIN++ML_L6_0400', 'F_MAIN++ML_L6_APD', 'F_MAIN++NIL_DYN_LREG', 'F_MAIN++NIL_DYN_RREG', 'F_MAIN++NIL_MG_R5', 'F_MAIN++NIL_MG_R6', 'F_MAIN++NIL_MG_R60', 'F_MAIN+APD_TL_L5', 'F_MAIN+APD_TL_L6', 'F_MAIN+APD_TL_L60', 'F_MAIN+ML_L5_0400', 'F_MAIN+ML_L5_APD', 'F_MAIN+ML_L60_0400', 'F_MAIN+ML_L60_APD', 'F_MAIN+ML_L6_0400', 'F_MAIN+ML_L6_APD', 'F_MAIN+NIL_DYN_LREG', 'F_MAIN+NIL_DYN_RREG', 'F_MAIN+NIL_MG_R5', 'F_MAIN+NIL_MG_R6', 'F_MAIN+NIL_MG_R60', 'F_T++LREG_0050', 'F_T++NIL_BB_TG_R5', 'F_T++NIL_BB_TG_R6', 'F_T++NIL_BB_TG_R60', 'F_T++NIL_MG_R5', 'F_T++NIL_MG_R6', 'F_T++NIL_MG_R60', 'F_T++NIL_ML_L5', 'F_T++NIL_ML_L6', 'F_T++NIL_ML_L60', 'F_T++NIL_TL_L5', 'F_T++NIL_TL_L6', 'F_T++NIL_TL_L60', 'F_T++NIL_WF_TG_R5', 'F_T++NIL_WF_TG_R6', 'F_T++NIL_WF_TG_R60', 'F_T++RREG_0050', 'F_T+LREG_0050', 'F_T+NIL_BB_TG_R5', 'F_T+NIL_BB_TG_R6', 'F_T+NIL_BB_TG_R60', 'F_T+NIL_MG_R5', 'F_T+NIL_MG_R6', 'F_T+NIL_MG_R60', 'F_T+NIL_ML_L5', 'F_T+NIL_ML_L6', 'F_T+NIL_ML_L60', 'F_T+NIL_TL_L5', 'F_T+NIL_TL_L6', 'F_T+NIL_TL_L60', 'F_T+NIL_WF_TG_R5', 'F_T+NIL_WF_TG_R6', 'F_T+NIL_WF_TG_R60', 'F_T+RREG_0050', 'T_V_NIL_BL1', 'V_T_NIL_BL1']
Parameters:
  • spd_id (str) – the ID of the value used in the NEMDE xml input file.

  • type (str) – the type of the value used in the NEMDE xml input file. For more information on SPD types see the Constraint Implementation Guidelines published by AEMO, see AEMO doc.

Return type:

list[str] – a list of the constraint IDs whose RHS equations depend on the specified value.

update_spd_id_value(spd_id, type, value)

Updates the value of one of the inputs which the RHS constraint equations depend on.

Examples

>>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
>>> xml_cache_manager.load_interval('2019/01/01 00:00:00')
>>> rhs_calculator = RHSCalc(xml_cache_manager)
>>> rhs_calculator.update_spd_id_value('220_GEN_INERTIA', 'A', '100.0')
Parameters:
  • spd_id (str) – the ID of the value used in the NEMDE xml input file.

  • type (str) – the type of the value used in the NEMDE xml input file. For more information on SPD types see the Constraint Implementation Guidelines published by AEMO, see AEMO doc.

  • value (str) – the new value to set the input to, given as a string representing a float.
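The three methods above are designed to be used together: find the constraints whose RHS equations depend on an input, update that input, then recompute. A minimal sketch of the workflow, reusing the spd_id and value from the examples in this section (whether a given spd_id appears in any RHS equation depends on the interval loaded):

>>> rhs_calculator = RHSCalc(xml_cache_manager)
>>> affected = rhs_calculator.get_rhs_constraint_equations_that_depend_value(
...     '220_GEN_INERTIA', 'A')
>>> rhs_calculator.update_spd_id_value('220_GEN_INERTIA', 'A', '100.0')
>>> new_rhs = rhs_calculator.compute_constraint_rhs(affected)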