

[IEEE 2012 23rd International Workshop on Database and Expert Systems Applications (DEXA) - Vienna, Austria (2012.09.3-2012.09.7)]

Adaptive Multiagent Model Based on Reinforcement Learning for Distributed Generation Systems

Daniel Divényi, András Dán
Department of Electric Power Engineering

Budapest University of Technology and Economics Budapest, Hungary

e-mail: [email protected]

Abstract—Distributed generation has spread widely in recent decades, raising many questions regarding the safe and high-quality operation of power systems. The investigation of these questions requires a proper model considering the different technical, economic and legal aspects. Our research is aimed at developing a multiagent system where rational agents control each distributed generation unit. Based on an intelligent agent-program, the agents are able to optimize their operation taking several viewpoints into account: fulfilling the contractual obligations, considering the technical constraints and maximizing the realized profit in a continuously varying market environment. This paper describes a simple reinforcement learning method that results in an adaptive agent-program. The agents are informed about their realized profits, and they apply this information to evaluate their former decisions and adjust the parameters of their own agent-program. The verification of the model proved that the developed agent-program provides acceptable results compared to the real productions.

Keywords—distributed generation; multiagent modeling; state-based method; strategies; reinforcement learning

I. INTRODUCTION

During the last few decades many distributed generation (DG) units have been installed in power systems due to the rapid development of renewable and cogeneration technologies and to the global pressure to increase efficiency and save energy. Therefore the penetration of DG systems has reached a significant level in most European countries. However, the large number of different units with relatively small built-in capacity influences the normal operation of power systems, and several questions have been raised regarding the quality and safety of the power service.

Beyond these problems, the development of the technologies has turned researchers' attention to smart grids, a totally new architecture of the present power systems. Although the detailed conceptions of smart grids are not settled, the main objective is to form smaller, self-supporting grids applying DG units.

Analyzing the effects of widespread distributed generation on power systems, or investigating the behavior of the generating side of smart grids, requires a proper model considering several aspects of these complex systems.

Our research is aimed at developing a multiagent system to model the cooperation of different DG units. Section II describes the considered aspects implemented in our model. Section III provides a short description of the agent technology based on [1], and briefly presents the realized multiagent system (MAS); a detailed description was published in [7]. The main contribution of this paper is Section IV, describing the developed utility-based agent-program and its reinforcement learning algorithms. Finally, some verification results are detailed in Section V, proving the ability of the presented method.

II. CONSIDERED ASPECTS OF DISTRIBUTED GENERATION

The proper modeling of distributed generation units requires the consideration of several aspects. The implemented MAS provides a well-arranged framework that makes it possible to easily add or remove different viewpoints. The following issues are implemented.

A. Renewable and cogeneration technologies
The MAS is able to model DG units applying either renewable or cogeneration technologies. Regarding cogeneration, several different technologies are available in power systems: gas engines (possibly with biogas fuel), gas turbines, combined-cycle gas turbines, or combinations of boilers and steam turbines. Although there are essential differences in the technical specifications of the units listed above (regarding efficiency, ramping, start-up time, the temperature and pressure of the supplied heat, etc.), all of them should be handled according to the technical prescriptions.

Regarding renewable energy resources, the model is also able to handle wind turbines with different characteristics and rated power. Although water and solar resources are currently not implemented, adding them does not require essential development. (Only the installed wind generation capacity is significant in Hungary.)

All generation units need regular service after a determined operating time, and sometimes unplanned outages can occur.

B. Weather dependencies
The operation of cogeneration units is mainly influenced by the demanded heat, which depends on the temperature (or on

2012 23rd International Workshop on Database and Expert Systems Applications

1529-4188/12 $26.00 © 2012 IEEE

DOI 10.1109/DEXA.2012.31

303


the current day-time in the case of industrial heating service). Furthermore, the capacity of the heating system is not negligible and also has to be taken into account.

In the case of wind turbines, variations of the wind speed have an even stronger influence on the operation.

C. Market environment
Besides the technical specifications and the weather dependencies, both of which are physical constraints that must be complied with, all DG units operate in a market environment and try to maximize their profit.

The calculation of the realized profit is hard because the necessary data (prices) are rarely published. Depending on the status of the given DG resource in the Hungarian electricity market (obligatory off-take / cogeneration balance-group / liberalized power market), the electricity prices might be officially determined, may be indexed to the Hungarian Power Exchange, or in some cases can only be estimated. The costs and expenses also have to be estimated.

D. Legal environment
Finally, the legal environment and the contracts influence the behavior of DG units. Getting into the balance-group of obligatory off-take requires fulfilling certain efficiency criteria. Furthermore, the heat-service contracts oblige cogeneration units to keep certain technical constraints, and finally the fuel consumption and electricity production shall not deviate from the values declared on the previous day. If the units violate these limits, they have to pay a fine.

Summarizing the constraints and viewpoints listed above, there are many aspects that the modeled DG units have to take into account. All of these considerations have been implemented in our model in a clearly understandable framework that is detailed in [6]; nevertheless, the next section provides a short outline of this framework.

III. THE MULTIAGENT MODEL

A. General description
Two types of agent were implemented: the DG agent and the DG Concentrator agent (DGC agent). One DG agent controls one DG resource, namely all generation units installed in the relevant resource. (For example, if a DG resource consists of three gas engines, then one DG agent handles all these units.) The model of the whole Hungarian DG system requires 200-300 DG agents; however, this paper focuses on the presentation of the agent-program, which is easier to demonstrate on one agent.

Interactions between DG agents are not allowed; all DG agents communicate only with the DGC agent. The DGC is responsible for the dispatch and control of the whole DG system, hence providing a virtual power plant for the transmission system operator (TSO).

B. Task environment: PEAS planning
The concept of the implemented MAS is introduced in this section based on [1]. The Performance, Environment, Actions and Sensors are shortly defined as follows:
− Performance measure: the objective value to appreciate the success or “goodness” of agent behavior.
− Environment: the agents live in the environment influencing them to make actions.
− Actions determine what the agent is able to do by its effectors.
− Sensors determine what the agent is able to perceive about the environment.

Summarizing the concepts listed above, the agents live in the Environment and perceive its changes applying their sensors. Depending on the former and actual perceptions, agents use their effectors to change the environment or their inner state. The performance measure is an objective index to evaluate the behavior of the agents. All agents try to maximize the performance measure.

1) Performance measure
The model applies a utility-based agent-program where the utility determines the performance measure. Each possible action of the agent is rated by a utility value, and the agent chooses the one with the maximum utility. In Section IV a short outline of the utility calculation is presented; a more detailed description is published in [6].

2) Environment
The environment contains the variables that are independent from the agents. In our model the environment consists of the simulation of the following signals:
− Temperature: a stochastic simulator is implemented based on historical data [8].
− Heat demand: based on the temperature data and the usual heat-system planning regulations [9] in the case of district heating systems. In the case of industrial heat service, it depends on the current part of the day.
− Wind speed: based on statistical simulation and AR models [10].
− Prices: electric energy prices and heat energy prices are estimated or determined by official prices, depending on the current status of the relevant DG agent in the Hungarian electricity market [11-14].

3) Effectors
The DG agent has one effector to determine the operational state, choosing from the different available ones. Whenever the agent has to use its effector (schedule planning, outage occurrence, DGC dispatch), the agent-program calculates the utility of each operational state and the best one is chosen.

The available operational states are determined once, when the agent is instantiated, depending on the applied technologies. However, the states have general parameters common to all technologies (such as electric energy, sold heat, wasted heat, consumed gas, controlling time, meteorological dependencies…); these general parameters provide enough information for the utility calculation. Consequently the agent-program is independent from the technology applied in



the DG resources. This is the main advantage of the developed state-based method detailed in [6]. Fig. 1 illustrates the states of a DG agent possessing two gas engines.
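The technology-independent state representation described above can be sketched in code. The following minimal Python illustration is our own (all class names, field names and numeric values are invented for illustration; the actual parameter set is given in [6]):

```python
from dataclasses import dataclass

@dataclass
class OperationalState:
    """General parameters shared by all technologies (names illustrative)."""
    name: str
    electric_energy: float   # electric energy produced in the period
    sold_heat: float         # heat energy supplied to consumers
    wasted_heat: float       # heat energy wasted
    consumed_gas: float      # fuel consumed in the period
    controlling_time: float  # time needed to reach the state

# A DG resource with two gas engines might expose states such as:
states = [
    OperationalState("both off", 0.0, 0.0, 0.0, 0.0, 0.0),
    OperationalState("engine 1 only", 1.0, 6.0, 0.5, 300.0, 15.0),
    OperationalState("both engines", 2.0, 12.0, 1.0, 600.0, 15.0),
]
```

Because the utility calculation reads only these general fields, the same agent-program can rate states of a gas engine, a gas turbine or a wind turbine alike.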

4) Sensors
The sensors of DG agents enable the agent to obtain information about the current time and uncertain forecasts of temperature, heat demand and wind speed. Of course, agents perceive the outages of their generation units.

C. Characteristics of the environment
Based on the terminology of [1], the implemented task environment has the following characteristics:
− Partially observable: the agent cannot observe the outages and precise wind-speed values in the future.
− Stochastic: outages and weather signals are stochastic.
− Sequential: each day and each 15-minute period depends on the previous one.
− Static: the environment cannot change while the agent-program runs.
− Discrete actions, continuous environment.
− Multiagent: there can be several DG agents in the environment if a virtual power plant is investigated. This is a cooperative MAS: the agents try to keep the aggregated scheduled production and help each other in case of outages.

IV. UTILITY-BASED AGENT-PROGRAM

A. Introduction
Reference [1] provides a skeleton for a utility-based agent-program. Firstly, the agent refreshes its knowledge about the environment based on the information of the actual percept. Secondly, it estimates the expectable effects of its possible actions and then it determines the utilities. Finally, the agent selects the most useful action to do. (Fig. 2.)

Figure 1. Available operational states of a DG having 2 gas-engines.

The developed agent-program follows the above mentioned steps. Let us take the example when the agent has to plan its day-ahead schedule. Firstly, the DG agent obtains some information about the environment: the heat-demand forecast (or wind-speed forecast), the expectable prices, the current states of the generation units (out of operation, under maintenance, currently running), and some information about the current operation (e.g. the heat system is a little bit overheated, or the storage heater is nearly empty…). Secondly, it supposes each available operational state in operation and evaluates the expectable effects from several points of view: whether the heat demand would be complied with, how much fuel would be consumed, how much profit would be realized… Finally, the agent selects the operational state that seems to be the most useful.

The actions have to be evaluated from different viewpoints. The so-called strategies perform this separation; each strategy concerns only one of the considered aspects (Section II) and calculates a partial utility value (PU) for each action. The total utility of an action is the weighted sum of the partial utility values of all strategies for that action.
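The weighted-sum selection rule can be sketched as follows. This is our own toy illustration (the strategy names EL and HS come from the paper, but the weights, the lambda-based partial utilities and the two-state example are invented; the real PU equations are in [6]):

```python
class Strategy:
    """One strategy rates every action from a single viewpoint."""
    def __init__(self, name, pu_func):
        self.name = name
        self.pu_func = pu_func

    def partial_utility(self, state):
        # PUs are assumed to be already normalized into [-1, 1].
        return self.pu_func(state)

def total_utility(state, strategies, weights):
    # Total utility = weighted sum of the partial utilities of all strategies.
    return sum(weights[s.name] * s.partial_utility(state) for s in strategies)

def choose_state(available_states, strategies, weights):
    # The agent selects the available state with the maximum total utility.
    return max(available_states,
               key=lambda st: total_utility(st, strategies, weights))

# Toy example: two states rated by two strategies.
strategies = [
    Strategy("EL", lambda st: 1.0 if st == "run" else -1.0),
    Strategy("HS", lambda st: 0.2 if st == "run" else 0.5),
]
weights = {"EL": 0.7, "HS": 0.3}
best = choose_state(["run", "stop"], strategies, weights)  # -> "run"
```

Here "run" wins because 0.7·1.0 + 0.3·0.2 = 0.76 exceeds 0.7·(−1.0) + 0.3·0.5 = −0.55.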

The agent-program has a special step in which the unavailable actions are filtered out of the utility calculation process. For example, an action (operational state) is unavailable if it requires all generation units to be in operation but one of them is currently under maintenance.

B. Strategies for utility calculation
Five strategies were implemented to cover the different aspects of DG unit operation management:
− Electricity strategy (EL) adjusts the electric energy production of the DG agent. It stimulates the agent not to deviate from the schedule and to substitute a unit in case of outage.
− Heating service strategy (HS) evaluates the amount of heat supplied by the DG agent. It prefers the supplied heat energy to meet the demanded value; however, it considers the physical system and can, for example, suggest overheating before a short maintenance.

Figure 2. Agent-program of utility-based agents [1].



− Storage heater strategy (SH) manages the optimal level of the storage heater, if available. It tries to schedule the fill and empty periods of the storage in the most economical way.
− Generation unit manager strategy (GUM) takes the technical specifications of each generation unit into consideration. It prefers a continuous constant load for each unit and suggests avoiding unnecessary ramps and starting cycles.
− Fuel consumption strategy (FC) prevents the agent from deviating significantly from the fuel consumption declared on the previous day.

The exact equations for calculating the partial utilities are published in [6]. However, the EL, HS and SH strategies have been developed to be adaptive, and they share a common algorithm for utility calculation and learning, which is described in the following section.

There are other strategies (Meteorological, State selector and Profit strategies) not involved in the utility calculation; however, they influence the resulting actions indirectly. The State selector strategy controls the whole agent-program, while the Profit strategy has a great effect on the learning processes.

C. Framework for adaptive strategies
The main contribution of this paper, in contrast with [6], is the presentation and verification of the adaptive strategies and algorithms.

1) Reinforcement learning
The model implements a simplified reinforcement learning algorithm. DG agents get feedback (“reward”) about their actions at the end of each day. This reward represents the realized daily profit for all strategies. Depending on the reward value, the strategies evaluate the previously suggested actions and perhaps modify the parameters of the utility calculation algorithm. During the next run of the agent-program, the DG agents use the obtained information to suggest better actions.

The algorithm of reinforcement learning has been implemented in a uniform framework in order to be easily extendable in the future. The functioning of the algorithm is illustrated using the example of the storage heater strategy.

2) How shall the utility of the different actions be calculated?
Based on former percepts, the strategy determines a set-point that is expected to suggest optimal operation for the DG agent regarding the aspect considered by the strategy. In the case of the SH strategy, the set-point describes the relative level of the storage. For example, the set-point proposes that the DG agent empty the storage at dawn and refill it afterwards (Fig. 3). (The price of electric energy is usually low at dawn. The DG unit should stop for these hours and serve the heat from the storage.)

The partial utility of each action can be determined based on the deviation from the set-point. For example, at 6 p.m. a state decreasing the storage level is not useful. A state that does not alter the storage level is better, but the most useful state is the one that fills the storage to the appropriate degree. The partial utility is determined by (1).

Figure 3. Proposed set-point for storage-heater strategy and an example of one realization.

PU_heatstorage = −|L_setpoint − L_state| (1)

L_setpoint: level of the storage heater required by the set-point.
L_state: calculated level of the storage heater supposing the state in operation.

L_setpoint and L_state shall be interpreted at the finish time of the selectable state. Suppose a situation in which the agent-program shall select a state for the next hour at 6 p.m. Then L_setpoint shall be determined at 7 p.m. (approx. 70%), and L_state shall be interpreted as the level the supposed state would result in at this time. If a state did not alter the level, then L_state at 7 p.m. would be equal to the realized level at 6 p.m. (approx. 65%). Hence the partial utility value would be −0.05 for this state.

Partial utilities have to be calculated for all available states and then normalized into the [−1; 1] interval. After this projection it is easy to weight the different viewpoints.
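A minimal sketch of (1) together with the projection onto [−1; 1] follows. The min-max normalization used here is our assumption (the paper states only that the PUs are normalized into that interval), and levels are expressed as fractions of a full storage:

```python
def pu_heat_storage(level_setpoint, level_state):
    # Eq. (1): PU = -|L_setpoint - L_state|.
    return -abs(level_setpoint - level_state)

def normalize(pus):
    # Project raw partial utilities onto [-1, 1] (min-max scaling,
    # our assumption) so different strategies can be weighted together.
    lo, hi = min(pus), max(pus)
    if hi == lo:
        return [0.0] * len(pus)
    return [2.0 * (p - lo) / (hi - lo) - 1.0 for p in pus]

# Set-point 70%; three candidate states leaving the storage at 65%, 70%, 40%.
raw = [pu_heat_storage(0.70, level) for level in (0.65, 0.70, 0.40)]
# raw is approximately [-0.05, 0.0, -0.30]
scaled = normalize(raw)  # the best state maps to 1.0, the worst to -1.0
```

The 65% state reproduces the −0.05 value of the worked example above; after scaling, the state that hits the set-point exactly receives the maximum utility of 1.0.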

3) How shall the set-point be determined?
Suitable determination of the set-point has great importance. Applying a wrong set-point results in a production of the modeled DG totally different from the real one.

The set-point is determined based on profit tables. The sizes of the tables are fixed: they have 24 columns, one for each hour, and 21 rows for the discrete steps of the set-point. In the case of the SH strategy, the first row means the storage is full, the last row relates to the empty storage, and between them the storage is 5-95% filled. The values of the table mean the highest experienced daily profit using the relevant set-point (row) in the relevant hour (column). During the reinforcement, in each hour (column) the profit value of the realized storage level (row) is refreshed if the profit (reward) is greater than the original value. Hence the values of the table are continuously refreshed after each day. Based on the row of the maximum value in each column (hour), the set-point can be determined.
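The table update and set-point derivation described above can be sketched as follows (our illustrative code, not the authors' implementation; `realized_rows` maps each hour to the row index of the realized storage level):

```python
HOURS, LEVELS = 24, 21  # 24 hourly columns, 21 set-point rows (100%..0%)

def update_profit_table(table, realized_rows, daily_profit):
    # Reinforcement step: for each hour, refresh the cell of the realized
    # storage level (row) if the daily profit (reward) exceeds the stored value.
    for hour in range(HOURS):
        row = realized_rows[hour]
        if daily_profit > table[row][hour]:
            table[row][hour] = daily_profit

def derive_setpoint(table):
    # The set-point for each hour is the row holding the column maximum.
    return [max(range(LEVELS), key=lambda r: table[r][h])
            for h in range(HOURS)]

# One simulated day: the storage stayed at row 10 (about 50%) all day
# and the day earned a profit of 3334 (value illustrative).
table = [[0.0] * HOURS for _ in range(LEVELS)]
update_profit_table(table, realized_rows=[10] * HOURS, daily_profit=3334.0)
setpoint_rows = derive_setpoint(table)
```

Because only the best experienced profit is kept per cell, a single lucky day can dominate a cell until a better day overwrites it; the fictive-day mechanism described below counteracts this by revisiting rarely tested cells.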

Fig. 4 presents the profit table of the storage heater strategy; instead of analyzing the particular values, the cells are colored using a red-green (minimum-maximum) scale. The set-point can be derived by matching the greenest cells; the borders of the relevant cells are thickened. The learnt set-point is different from the suggested one (Fig. 3), which can be explained by the low level of heat demand in the morning.



[Data of Fig. 4: a 21 × 24 profit table; rows are storage levels from 100% down to 0% in 5% steps, columns are hours 0-23; each cell holds the highest experienced daily profit for that level and hour.]

Figure 4. Profit table of storage heater strategy.

DG agents often run fictive days to make the set-point more accurate. During this process the agent determines a set-point to test based on the profit table. These learning set-points help test the cells of the profit table that are never or rarely tested; furthermore, they prevent the set-point from getting stuck.
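One possible sketch of choosing such a learning set-point follows. The epsilon-greedy scheme and the visit-count bookkeeping are purely our assumptions; the paper states only that fictive days test rarely visited cells and keep the set-point from getting stuck:

```python
import random

def learning_setpoint(profit_table, visit_counts, hour, epsilon=0.2):
    # With probability epsilon, propose the least-visited level (row) for
    # this hour so rarely tested cells also get explored; otherwise follow
    # the best-known level (exploitation).
    rows = range(len(profit_table))
    if random.random() < epsilon:
        return min(rows, key=lambda r: visit_counts[r][hour])
    return max(rows, key=lambda r: profit_table[r][hour])

# Toy 3-level, 2-hour example (values illustrative).
profits = [[0.0, 1.0], [5.0, 0.0], [2.0, 2.0]]
visits = [[0, 4], [5, 1], [2, 0]]
exploit = learning_setpoint(profits, visits, hour=0, epsilon=0.0)  # row 1
explore = learning_setpoint(profits, visits, hour=0, epsilon=1.0)  # row 0
```

With epsilon = 0 the best-known row (profit 5.0) is chosen; with epsilon = 1 the never-visited row is tested instead, which is the role the fictive days play in the model.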

V. VERIFICATION

The validation of the model and the learning process is quite complex. Each DG unit differs from the others: different technologies, different heating service. Therefore each DG unit shall be verified separately. Because of the limited extent of this paper, it provides the verification of only one DG agent, applying cogeneration and taking part in district heating service (Fig. 5). The figure compares the production of the real and modeled DG in the last two weeks of October 2011.

Although the required parameters are not fully available, they can be adjusted by comparing the production of the modeled and real DG unit. In such a complex problem total similarity cannot be expected. In the model the heating period started a day earlier than in the real measurement. The model plans the most economical operation from day to day, in contrast with the real practice, where habits and former operation have great influence. Nevertheless, the main characteristics of the productions are similar enough to accept the verification: the daily produced energy and the period of days when production is decreased are nearly the same.

VI. CONCLUSION

A multiagent model has been developed to simulate the behavior of different DG resources. The agent-program of the DG units is independent of the applied technology and is based on an adaptive, profit-oriented algorithm. The considered aspects can be easily extended. This paper describes the adaptive agent-program, which has been verified by comparing the modeled and real production of a selected DG. Once the full verification is accomplished, the model will be proved suitable for different investigations regarding DG systems and smart grids.

Figure 5. Verification.

ACKNOWLEDGMENT

The authors appreciate the support of TÁMOP-4.2.1/B-09/11/KMR-2010-0002. This work was also supported in part by the MAVIR Hungarian TSO Company Ltd.

REFERENCES

[1] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. Prentice Hall, 2011, pp. 66-89.

[2] S. D. J. McArthur et al., "Multi-Agent Systems for Power Engineering Applications - Part I. Concepts, Approaches, and Technical Challenges," IEEE Transactions on Power Systems, vol. 22, no. 4, pp. 1743-1752, Nov. 2007.

[3] A.L. Dimeas and N.D. Hatziargyriou, "Operation of a Multiagent System for Microgrid Control," IEEE Transactions on Power Systems, vol. 20, no. 3, pp. 1447-1455, Aug. 2005.

[4] M. Pipattanasomporn, H. Feroze, and S. Rahman, “Multi-Agent Systems in a Distributed Smart Grid: Design and Implementation,” presented at the Power Systems Conference and Exposition (PSCE ’09), Seattle, 2009.

[5] M.P.F. Hommelberg, C.J. Warmer, I.G. Kamphuis, J.K. Kok, G.J. Schaeffer: “Distributed Control Concepts using Multi-Agent technology and Automatic Markets: An indispensable feature of smart power grids” presented at IEEE PES General Meeting, 2007

[6] D. Divényi, A. Dán: “Agent-based Modeling of Distributed Generation in Power System Control,” IEEE Transactions on Sustainable Energy, unpublished.

[7] D. Divényi and A. Dán, “Simulation Results of Cogeneration Units as System Reserve Power Source Using Multiagent Modeling,” in 16th International Conference on Intelligent System Applications to Power Systems (ISAP 2011), Hersonissos, 2011, Paper 97.

[8] Hungarian Meteorological Service (OMSZ). (2009, June) Climate Data Series. [Online]. http://www.met.hu/eghajlat/eghajlati_adatsorok

[9] A. Zsebik, "District-Heating Service," Budapest University of Technology and Economics, Budapest, Teaching aid (Hungarian) 2004.

[10] D. Divényi, J. Divényi: “Wind Speed Simulator Based on Wind Generation Using Autoregressive Statistical Model”, Electrotechnics Electronics Automatics (EEA), vol. 60, no. 2, 2012. in press

[11] (2011, March) Authorizations of Distributed Generations. [Online]. http://eh.gov.hu

[12] Hungarian Power Exchange (HUPX). (2012, March) Historical Data. [Online]. http://www.hupx.hu/market/historical_data/spot.html

[13] Parliament of Hungary, Hungarian Law LXXXVI about Electricity, 2007

[14] Ministry of National Development 50/2011 order about prices of district heating services.
