# Local LLMs
## What about finetuning work?
- [[LLM Finetuning]]
## Dual 3090 build information
- [Recommendations for 2x RTX 3090 build? : r/buildapc](https://www.reddit.com/r/buildapc/comments/ltvjxs/recommendations_for_2x_rtx_3090_build/)
- [Dual RTX 3090 for deep learning build : r/buildapc](https://www.reddit.com/r/buildapc/comments/x4rlx5/dual_rtx_3090_for_deep_learning_build/)
- [Dual 3090 Build Help : r/buildapc](https://www.reddit.com/r/buildapc/comments/o19kye/dual_3090_build_help/)
- [Best z690 motherboard for dual 3090 build? : r/buildapc](https://www.reddit.com/r/buildapc/comments/rhoeo5/best_z690_motherboard_for_dual_3090_build/)
- [Dual 3090 FE Build : r/watercooling](https://www.reddit.com/r/watercooling/comments/n83593/dual_3090_fe_build/)
- [Dual 4090 Build - Seeking Advice : r/buildapc](https://www.reddit.com/r/buildapc/comments/yfqfs8/dual_4090_build_seeking_advice/)
This looks great:
- [Building My Own Deep Learning Rig · Den Delimarsky](https://den.dev/blog/deep-learning-rig/)
- [DUAL 3090 rtx SLI, Dual CPU, 1.5 TB Ram, quiet, compact case | TechPowerUp Forums](https://www.techpowerup.com/forums/threads/dual-3090-rtx-sli-dual-cpu-1-5-tb-ram-quiet-compact-case.273503/)
- [Recommendations on new 2 x RTX 3090 setup - Deep Learning - fast.ai Course Forums](https://forums.fast.ai/t/recommendations-on-new-2-x-rtx-3090-setup/78202/447?page=21)
- [Build A Capable Machine For LLM and AI | by Andrew Zhu | CodeX | Medium](https://medium.com/codex/build-a-capable-machine-for-llm-and-ai-4ae45ad9f959)
## Llama 2 70B seems to be the most powerful
- 2x3090 notes:
  - [Llama 2 70b how to run : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/155x2fm/llama_2_70b_how_to_run/)
- 2x4090 Llama 2 70B notes:
  - [Exllama updated to support GQA and LLaMA-70B quants! : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/153xlk3/comment/jslk1o6/)
- maybe the best and most technical writeup:
  - [PC configuration to run a llama2 70B : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/15eo58x/pc_configuration_to_run_a_llama2_70b/)
Some instructions on downloading and running it, though these will probably be obsolete soon:
- [How to Run Llama 2 Locally: A Guide to Running Your Own ChatGPT-like Large Language Model | Sych](https://sych.io/blog/how-to-run-llama-2-locally-a-guide-to-running-your-own-chatgpt-like-large-language-model/)
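A quick sanity check on why 2x3090 (48 GB total VRAM) comes up so often for Llama 2 70B: a back-of-the-envelope estimate of weight memory at different quantization levels. This is a rough sketch that only counts the weights themselves (KV cache, activations, and framework overhead add more on top); the function name is my own, not from any of the linked posts.

```python
def weight_vram_gb(n_params_billions: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed just to hold the weights, in GB (1 GB = 1e9 bytes)."""
    return n_params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Llama 2 70B at common precisions/quantizations
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_vram_gb(70, bits):.0f} GB")
# 16-bit: ~140 GB  -> needs multi-A100 class hardware
# 8-bit:  ~70 GB   -> still over 2x24 GB
# 4-bit:  ~35 GB   -> fits in 2x3090/2x4090 with headroom for KV cache
```

This is why the Reddit threads above converge on 4-bit quants (Exllama/GPTQ-style) for dual-24 GB setups.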
![[Pasted image 20231013123605.png]]
This is pretty fucking good
![[Pasted image 20231013123620.png]]