Pre-recorded Sessions: From 4 December 2020 | Live Sessions: 10 – 13 December 2020


Technical Papers

  • Ultimate Supporter
  • Ultimate Attendee

Date/Time: 04 – 13 December 2020
All presentations are available on-demand on the virtual platform.


Lecturer(s):
Ruizhen Hu, Shenzhen University, China
Juzhan Xu, Shenzhen University, China
Bin Chen, Shenzhen University, China
Minglun Gong, University of Guelph, Canada
Hao Zhang, Simon Fraser University, Canada
Hui Huang, Shenzhen University, China

Description: We introduce the transport-and-pack (TAP) problem and develop a neural optimization model, based on reinforcement learning, to solve it. Given an initial spatial configuration of boxes, we seek an efficient method to iteratively transport and pack the boxes compactly into a target container. Packing alone is already a well-known, difficult combinatorial optimization problem; due to obstruction and accessibility constraints, our problem adds a transport-planning dimension to the already immense search space. With a learning-based approach, a trained network can encode solution patterns and use them to guide the solution of new problem instances, instead of executing an expensive online search. In our work, we represent the various constraints with a precedence graph and train a neural network, coined TAP-Net, using reinforcement learning to reward efficient packing. The network is built on a recurrent neural network (RNN) that takes as input the current precedence graph and the current packing state of the target container, and outputs the next box to pack along with its orientation. We train the network without supervision on randomly generated initial box configurations, via policy gradients, to learn optimal TAP policies that maximize packing efficiency. We demonstrate the performance of TAP-Net on a variety of examples, evaluating the network through ablation studies and comparisons to baselines and heuristic search methods. We also show that the network generalizes well to larger problem instances when trained on small-sized inputs.
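The description outlines an RNN policy that reads a precedence graph together with the container's packing state and emits the next box and its orientation, trained with policy gradients. The sketch below is a minimal PyTorch illustration of one such decision step; it is not the authors' code, and `TapPolicy`, `NUM_BOXES`, `NUM_ORIENTATIONS`, `STATE_DIM`, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# All names and sizes below are illustrative assumptions, not values from the paper.
NUM_BOXES = 8          # boxes per problem instance (assumed)
NUM_ORIENTATIONS = 2   # candidate axis-aligned orientations per box (assumed)
STATE_DIM = 16         # size of the encoded container packing state, e.g. a height map (assumed)

class TapPolicy(nn.Module):
    """RNN-based policy that scores (box, orientation) actions, in the spirit of TAP-Net."""
    def __init__(self, hidden=64):
        super().__init__()
        # Per-box encoder: box dimensions (w, h) concatenated with the box's precedence row.
        self.box_enc = nn.Linear(2 + NUM_BOXES, hidden)
        # Container-state encoder feeding a recurrent cell that tracks packing progress.
        self.state_enc = nn.Linear(STATE_DIM, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        # One logit per orientation for each box.
        self.head = nn.Linear(hidden, NUM_ORIENTATIONS)

    def forward(self, box_feats, precedence, container_state, h):
        # box_feats: (B, N, 2), precedence: (B, N, N), container_state: (B, STATE_DIM)
        box_emb = torch.relu(self.box_enc(torch.cat([box_feats, precedence], dim=-1)))
        h = self.rnn(torch.relu(self.state_enc(container_state)), h)
        scores = self.head(box_emb + h.unsqueeze(1))             # (B, N, ORI)
        # Mask boxes that are still blocked, i.e. have remaining predecessors in the graph.
        blocked = precedence.sum(dim=-1) > 0
        scores = scores.masked_fill(blocked.unsqueeze(-1), float("-inf"))
        return scores.flatten(1), h                              # flat (box, orientation) logits

# One decision step on random tensors (shapes only, not a real packing instance).
B = 4
policy = TapPolicy()
h = torch.zeros(B, 64)
box_feats = torch.rand(B, NUM_BOXES, 2)
precedence = (torch.rand(B, NUM_BOXES, NUM_BOXES) < 0.2).float()
precedence[:, 0, :] = 0.0                                        # keep at least one box accessible
container_state = torch.rand(B, STATE_DIM)

logits, h = policy(box_feats, precedence, container_state, h)
action = torch.distributions.Categorical(logits=logits).sample() # sampled for policy-gradient training
box_idx, orientation = action // NUM_ORIENTATIONS, action % NUM_ORIENTATIONS
```

A full training loop, under these assumptions, would roll this step out until all boxes are placed and reinforce the sampled actions with a packing-efficiency reward (e.g. REINFORCE with a baseline), matching the unsupervised policy-gradient setup the description mentions.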

 
