June 25 and 27 are internal days for Intel employees and Intel partners.
Authors will submit their abstracts to EasyChair, a public conference-management site independent of Intel, and should not disclose confidential information that they and/or their respective employers are not ready to share with the public.
Please bear in mind that no sales pitches will be allowed.
If your presentation advertises products or services, please do not submit; such submissions will be rejected.
Selected speakers might be required to sign an NDA (if one is not already in place) and a personal release waiver form so that their pictures/bios can be posted on the Intel website.
Speakers will be allocated 45 minutes of presentation time plus 10 minutes for Q&A (if your talk needs more or less time, please contact us).
This Intel paper turns the basic assumption that large models are needed on its head.
The researchers asked the question, “Why don’t we start with a compact model and grow the model as we continue to train it?”
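The summary does not specify how the model is grown, but one well-known, function-preserving way to widen a network mid-training is a Net2Net-style duplication of hidden units. The sketch below is an illustrative assumption, not the paper's method: it duplicates a random hidden unit of a tiny linear network and splits its outgoing weights so the grown network computes exactly the same function, ready for further training at the larger size.

```python
import numpy as np

rng = np.random.default_rng(0)

def widen(W_in, W_out, rng):
    """Net2Net-style widening: duplicate a random hidden unit and
    split its outgoing weights in half, so the widened network
    computes the same function with one extra unit."""
    j = rng.integers(W_in.shape[1])          # hidden unit to duplicate
    W_in = np.hstack([W_in, W_in[:, [j]]])   # copy its incoming weights
    W_out = np.vstack([W_out, W_out[[j], :]])
    W_out[j, :] /= 2.0                       # split outgoing weights
    W_out[-1, :] /= 2.0                      # between the two copies
    return W_in, W_out

# Tiny linear "network": y = x @ W_in @ W_out, 2 hidden units to start
x = rng.normal(size=(4, 3))
W_in = rng.normal(size=(3, 2))
W_out = rng.normal(size=(2, 1))

y_before = x @ W_in @ W_out
W_in2, W_out2 = widen(W_in, W_out, rng)
y_after = x @ W_in2 @ W_out2

print(W_in2.shape)                     # one more hidden unit: (3, 3)
print(np.allclose(y_before, y_after))  # True: same function, larger model
```

In a real training loop, this growth step would be applied periodically, letting a compact model absorb capacity only when it is needed.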
Gradient-based RL methods, which rely heavily on GPUs, are commonly used by AI researchers today.
While these methods can exploit rewards for learning, they suffer from limited exploration and costly gradient computations.