
Problem with input shape of Conv1D in a tf-agents Sequential network

Tags: python, sysadmin, agentic-ai, data-structures, json
ashish bhong wrote (#1):

I have created a trading environment using tf-agents:

```python
env = TradingEnv(df=df.head(100000), lkb=1000)
tf_env = tf_py_environment.TFPyEnvironment(env)
```

I passed a df of 100,000 rows, from which only the closing prices are used, giving a numpy array of 100,000 stock-price time-series values.

df:

```
       Date                       Open    High    Low     Close   volume
0      2015-02-02 09:15:00+05:30  586.60  589.70  584.85  584.95  171419
1      2015-02-02 09:20:00+05:30  584.95  585.30  581.25  582.30  59338
2      2015-02-02 09:25:00+05:30  582.30  585.05  581.70  581.70  52299
3      2015-02-02 09:30:00+05:30  581.70  583.25  581.70  582.60  44143
4      2015-02-02 09:35:00+05:30  582.75  584.00  582.75  582.90  42731
...    ...                        ...     ...     ...     ...     ...
99995  2020-07-06 11:40:00+05:30  106.85  106.90  106.55  106.70  735032
99996  2020-07-06 11:45:00+05:30  106.80  107.30  106.70  107.25  1751810
99997  2020-07-06 11:50:00+05:30  107.30  107.50  107.10  107.35  1608952
99998  2020-07-06 11:55:00+05:30  107.35  107.45  107.10  107.20  959097
99999  2020-07-06 12:00:00+05:30  107.20  107.35  107.10  107.20  865438
```

At each step the agent has access to the previous 1000 prices plus the current price of the stock, i.e. 1001 values, and it can take one of 3 possible actions: 0, 1 or 2. I then wrapped the environment in TFPyEnvironment to convert it to a TF environment. The prices the agent can observe are a 1-D numpy array:

```
prices = [584.95 582.3 581.7 ... 107.35 107.2 107.2]
```

TimeStep and action specs:

```
TimeStep Specs: TimeStep(
{'discount': BoundedTensorSpec(shape=(), dtype=tf.float32, name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)),
 'observation': BoundedTensorSpec(shape=(1001,), dtype=tf.float32, name='_observation', minimum=array(0., dtype=float32), maximum=array(3.4028235e+38, dtype=float32)),
 'reward': TensorSpec(shape=(), dtype=tf.float32, name='reward'),
 'step_type': TensorSpec(shape=(), dtype=tf.int32, name='step_type')})
Action Specs: BoundedTensorSpec(shape=(), dtype=tf.int32, name='_action', minimum=array(0, dtype=int32), maximum=array(2, dtype=int32))
```

I then built a DQN agent, but I want to build it with a Conv1D layer. My network consists of Conv1D, MaxPool1D, Conv1D, MaxPool1D, Dense_64, Dense_32 and a q_value_layer. I created the layers using the tf.keras.layers API, stored them in a dense_layers list, and created a Sequential network for the DQN agent:

```python
learning_rate = 1e-3
action_tensor_spec = tensor_spec.from_spec(tf_env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

dense_layers = []
dense_layers.append(tf.keras.layers.Conv1D(
    64
```
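The code in the post is cut off after the first Conv1D call. For reference, here is a minimal sketch of how the described layer stack and agent wiring might look. Only the Conv1D/MaxPool1D/Conv1D/MaxPool1D/Dense_64/Dense_32/q_value_layer ordering, the learning rate, and the specs come from the post; the filter counts, kernel sizes and pool sizes are assumptions, and `tf_env` is the wrapped environment defined above. The leading Reshape layer addresses the point the question is about: Keras Conv1D expects a (batch, steps, channels) input, while the observation spec is (1001,).

```python
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.networks import sequential
from tf_agents.specs import tensor_spec
from tf_agents.utils import common

learning_rate = 1e-3
action_tensor_spec = tensor_spec.from_spec(tf_env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

# The observation spec is (1001,), so the network sees (batch, 1001) at call time.
# Conv1D wants (batch, steps, channels), so add an explicit channel axis first.
layers = [
    tf.keras.layers.Reshape((1001, 1)),
    tf.keras.layers.Conv1D(filters=64, kernel_size=5, activation='relu'),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, activation='relu'),
    tf.keras.layers.MaxPool1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    # Q-value layer: one linear output per action.
    tf.keras.layers.Dense(num_actions, activation=None),
]
q_net = sequential.Sequential(layers)

agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=tf.Variable(0),
)
agent.initialize()
```

With Reshape((1001, 1)) in front, the 1001 prices sit on the steps axis with a single channel, which is the layout the convolution slides over.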


Muneer Ahmad1 wrote (#2):

I know it has something to do with the input shape of the first Conv1D layer, but I can't figure out what I am doing wrong. At each time_step the agent receives an observation of prices as a 1-D array of length 1001, so I thought the input shape of the Conv1D should be (1, 1001), but that is wrong and I don't know how to solve this error.

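On the shape point: a (1, 1001) input to Conv1D would mean one time step with 1001 channels, so the kernel has nothing to slide over; the layer wants the 1001 prices on the steps axis with a single channel, i.e. (1001, 1). A quick standalone check (plain Keras, outside tf-agents; the batch size and kernel size here are arbitrary choices, not from the post):

```python
import tensorflow as tf

conv = tf.keras.layers.Conv1D(filters=64, kernel_size=5)

x = tf.random.uniform((32, 1001))  # (batch, 1001), as the environment delivers it

# Add a trailing channel axis: (batch, 1001, 1) = 1001 time steps, 1 channel.
y = conv(tf.expand_dims(x, axis=-1))
print(y.shape)  # (32, 997, 64)

# Reshaping to (batch, 1, 1001) instead would give 1 time step with 1001
# channels, and a width-5 kernel cannot be applied to a length-1 sequence.
```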


Member 15078716 wrote (#3):

Some more details about what this does, step by step, might help others to help you. Nice code, though. Thanks. :thumbsup:
