| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| FangYin Cheng | 95d3f5222b | feat(model): Support AquilaChat2-34B | 2023-10-30 11:48:48 +08:00 |
| FangYin Cheng | d5a52f79f1 | feat(model): Support vLLM | 2023-10-09 20:02:11 +08:00 |
| FangYin Cheng | 5dfe611478 | feat(ChatKnowledge): Add custom text separators and refactor log configuration | 2023-09-28 11:54:58 +08:00 |
| FangYin Cheng | 896af4e16f | chore: fix shutdown error when not install torch | 2023-09-21 12:20:24 +08:00 |
| FangYin Cheng | 986ada3aeb | feat(model): supports the deployment of multiple models through the API and add the corresponding command line interface | 2023-09-11 17:15:20 +08:00 |
| FangYin Cheng | b6a4fd8a62 | feat: Multi-model support with proxyllm and add more command-cli | 2023-09-05 11:26:24 +08:00 |
| FangYin Cheng | f19551a7cd | feat: Optimize code import time | 2023-09-01 10:40:18 +08:00 |
| FangYin Cheng | e4dd6060da | feat: Command-line tool design and multi-model integration | 2023-08-31 17:22:31 +08:00 |
| FangYin Cheng | dd86fb86b1 | feat: Multi-model command line | 2023-08-30 11:07:35 +08:00 |
| FangYin Cheng | d467092766 | merge multi-model and ChatExcel | 2023-08-29 22:54:18 +08:00 |
| FangYin Cheng | b5fd5d2a3a | feat: Support llama.cpp | 2023-08-15 19:00:08 +08:00 |
| csunny | 743863d52b | fix: set num_gpus reference for mps + cpu | 2023-08-03 16:54:58 +08:00 |
| csunny | d67a6a642a | fix: num_gpus referenced error for mps + cpu | 2023-08-03 16:52:39 +08:00 |
| FangYin Cheng | a4574aa614 | feat: Support vicuna-v1.5 and WizardLM-v1.2 | 2023-08-03 14:14:29 +08:00 |
| FangYin Cheng | d8a4b776d5 | feat: Support 8-bit quantization and 4-bit quantization for multi-gpu inference | 2023-08-02 19:29:59 +08:00 |
| zhanghy-sketchzh | 00d24101f3 | support multi gpus | 2023-06-14 00:22:02 +08:00 |
| csunny | fe8291b198 | feature: guanaco stream output | 2023-06-04 20:38:34 +08:00 |
| csunny | 09308bcdf0 | fix: guanaco model | 2023-05-31 14:13:12 +08:00 |
| csunny | 16c6986666 | fix: lint | 2023-05-30 19:11:34 +08:00 |
| csunny | ea334b172e | feature: add model server proxy | 2023-05-30 17:16:29 +08:00 |
| yihong0618 | b098a48898 | ci: make ci happy lint the code, delete unused imports Signed-off-by: yihong0618 <zouzou0208@gmail.com> | 2023-05-24 18:43:04 +08:00 |
| yihong0618 | 60ecde5892 | fix: can not answer on mac m1-> mps device | 2023-05-24 12:33:41 +08:00 |
| csunny | f52c7523b5 | llms: fix | 2023-05-21 14:54:16 +08:00 |
| csunny | ce72820085 | llms: add mps support | 2023-05-21 14:48:54 +08:00 |
| csunny | 4302ae9087 | Add: multi model support | 2023-05-18 15:44:29 +08:00 |
| csunny | 6d76825a10 | rm fschat relay | 2023-05-11 10:59:08 +08:00 |
| csunny | fd8bc8d169 | modelLoader use singleton | 2023-05-10 10:53:48 +08:00 |
| csunny | bfbbf0ba88 | update conversation | 2023-05-09 21:48:47 +08:00 |
| csunny | d746086694 | adjust project content | 2023-05-08 00:34:36 +08:00 |
| csunny | 539e98f1dc | fork file replace import | 2023-05-07 05:14:43 +08:00 |
| csunny | eca14bc038 | fix load model gpu oom | 2023-04-29 23:02:13 +08:00 |
| csunny | acf9dbbd82 | fix problem | 2023-04-29 21:50:47 +08:00 |
| csunny | 0767537606 | add vicuna embedding | 2023-04-29 18:28:42 +08:00 |
| csunny | e5ffb6582c | a demo | 2023-04-28 23:53:29 +08:00 |
| csunny | 0861a09a00 | init model and tokenizer | 2023-04-28 22:18:08 +08:00 |
| csunny | c72ae1a87f | model: add model file | 2023-04-28 22:04:37 +08:00 |
| csunny | 38f57e157c | init | 2023-04-28 21:59:18 +08:00 |