After tinkering with local large models all night, I settled on the following setup:


1. Main model: Qwen-30B-Instruct. It's sufficient for daily tasks, and its instruction following is very good.

2. Reasoning backup: I kept an 8-bit MLX build of GPT-OSS. The 4-bit version of GPT-OSS doesn't perform well, and maxing out the reasoning budget across its three effort tiers isn't very meaningful.

3. Coding: for all coding work I plan to use a SOTA flagship model directly, without considering local models (after all, it's work).
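The quality gap between the 8-bit and 4-bit builds mentioned above comes down to quantization error: halving the bit width multiplies the quantization step size by 16. Here is a minimal, self-contained sketch of that effect using naive symmetric per-tensor quantization — a toy model, not the actual group-wise scheme MLX uses — just to show why 4-bit rounding loses noticeably more precision:

```python
import random
import math

def quantize_rms_error(weights, bits):
    """Round-trip weights through symmetric n-bit quantization
    and return the root-mean-square error introduced."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    err = 0.0
    for w in weights:
        q = round(w / scale)            # quantize to an integer level
        q = max(-qmax, min(qmax, q))    # clamp to the representable range
        err += (w - q * scale) ** 2     # dequantize, accumulate squared error
    return math.sqrt(err / len(weights))

# Simulated weight tensor: 10k samples from a standard Gaussian,
# roughly the shape of a real weight distribution.
random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(10_000)]

err8 = quantize_rms_error(weights, 8)
err4 = quantize_rms_error(weights, 4)
print(f"8-bit RMS error: {err8:.4f}")
print(f"4-bit RMS error: {err4:.4f}")
```

On this toy tensor the 4-bit error is roughly an order of magnitude larger than the 8-bit error; real group-wise quantizers narrow the gap, but the direction of the tradeoff is the same.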