The full dataset viewer is not available; only a preview of the rows is shown below.
Dataset generation failed with the following error:
Error code: `DatasetGenerationError`
Exception: `DatasetGenerationError`
Message: An error occurred while generating the dataset
Traceback:

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
        writer.write_table(table)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
        pa_table = table_cast(pa_table, self._schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
        return cast_table_to_schema(table, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
        arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
        arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
        return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
        return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
        return array_cast(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
        return func(array, *args, **kwargs)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1962, in array_cast
        raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
    TypeError: Couldn't cast array of type double to null

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
        parquet_operations = convert_to_parquet(builder)
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
        builder.download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
        self._download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
        self._prepare_split(split_generator, **prepare_split_kwargs)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
        for job_id, done, content in self._prepare_split_single(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
        raise DatasetGenerationError("An error occurred while generating the dataset") from e
    datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
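The `TypeError: Couldn't cast array of type double to null` typically means the Arrow schema was inferred from records in which a numeric field (such as `params`, `model_params`, or `model_size` in the preview below) was always null, so the column was typed `null`, and a later record then supplied real float values. The sketch below reproduces that kind of mismatch with plain PyArrow; the column name and values are illustrative, not taken from this repository's files.

```python
import pyarrow as pa

# Schema as it would be inferred from records where the numeric field
# was always null: Arrow types the column as "null".
inferred_schema = pa.schema([("params", pa.null())])

# A later batch of records carries real float64 values for the same field.
later_batch = pa.table({"params": pa.array([2.38, 7.242], type=pa.float64())})

# Casting double -> null cannot succeed; the `datasets` library surfaces an
# equivalent failure as the TypeError shown in the traceback above.
try:
    later_batch.cast(inferred_schema)
except (pa.ArrowInvalid, pa.ArrowNotImplementedError) as err:
    print(f"cast failed: {err}")
```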
model (string) | revision (string) | private (bool) | params (float64) | architectures (string) | quant_type (string) | precision (string) | model_params (float64) | model_size (float64) | weight_dtype (string) | compute_dtype (string) | gguf_ftype (string) | hardware (string) | status (string) | submitted_time (unknown) | model_type (string) | job_id (int64) | job_start_time (unknown) | scripts (string) | base_model (string) | weight_type (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf | main | false | null | LlamaForCausalLM | AQLM | 2bit | null | null | int2 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-15T03:10:03" | quantization | -1 | null | ITREX | null | null |
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf | main | false | 2.38 | LlamaForCausalLM | AQLM | 2bit | 6.48 | 2.38 | int2 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-15T03:44:59" | quantization | -1 | null | ITREX | null | null |
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf | main | false | 2.73 | LlamaForCausalLM | AQLM | 2bit | 6.48 | 2.73 | int2 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-15T03:43:56" | quantization | -1 | null | ITREX | null | null |
Intel/neural-chat-7b-v3-1 | main | false | 7.242 | MistralForCausalLM | null | bfloat16 | null | null | null | null | null | null | RUNNING | "2024-04-08T08:42:16" | 🔶 : fine-tuned on domain-specific datasets | 6 | "2024-04-09T04:05:14" | null | Original | |
PawanKrd/Meta-Llama-3-8B-Instruct-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | float16 | *Q4_0.gguf | cpu | Pending | "2024-05-09T12:57:56" | quantization | -1 | null | llama_cpp | null | null |
PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | float16 | *Q4_0.gguf | cpu | Pending | "2024-05-10T07:32:46" | quantization | -1 | null | llama_cpp | null | null |
TheBloke/Falcon-7B-Instruct-GPTQ | main | false | 5.94 | RWForCausalLM | GPTQ | 4bit | 6.74 | 5.94 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T06:13:37" | quantization | -1 | null | ITREX | null | null |
TheBloke/Llama-2-13B-chat-AWQ | main | false | 7.25 | LlamaForCausalLM | AWQ | 4bit | 12.79 | 7.25 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T07:46:17" | quantization | -1 | null | ITREX | null | null |
TheBloke/Llama-2-13B-chat-GPTQ | main | false | 7.26 | LlamaForCausalLM | GPTQ | 4bit | 12.8 | 7.26 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T07:50:09" | quantization | -1 | null | ITREX | null | null |
TheBloke/Mistral-7B-Instruct-v0.2-AWQ | main | false | 4.15 | MistralForCausalLM | AWQ | 4bit | 7.03 | 4.15 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-09T02:54:02" | quantization | -1 | null | ITREX | null | null |
TheBloke/Mistral-7B-Instruct-v0.2-GPTQ | main | false | 4.16 | MistralForCausalLM | GPTQ | 4bit | 7.04 | 4.16 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T05:47:33" | quantization | -1 | null | ITREX | null | null |
TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ | main | false | 24.65 | MixtralForCausalLM | AWQ | 4bit | 46.8 | 24.65 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T06:21:34" | quantization | -1 | null | ITREX | null | null |
TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ | main | false | 23.81 | MixtralForCausalLM | GPTQ | 4bit | 46.5 | 23.81 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-13T11:54:45" | quantization | -1 | null | ITREX | null | null |
TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ | main | false | 5.98 | LlamaForCausalLM | GPTQ | 4bit | 10.57 | 5.98 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-09T09:03:57" | quantization | -1 | null | ITREX | null | null |
astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit | main | false | 5.74 | LlamaForCausalLM | GPTQ | 4bit | 7.04 | 5.74 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T04:42:46" | quantization | -1 | null | ITREX | null | null |
casperhansen/falcon-7b-awq | main | false | 4.16 | RWForCausalLM | AWQ | 4bit | 8.33 | 4.16 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T06:47:20" | quantization | -1 | null | ITREX | null | null |
crusoeai/Llama-3-8B-Instruct-Gradient-1048k-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | int8 | *Q4_0.gguf | cpu | Pending | "2024-05-11T17:37:21" | quantization | -1 | null | llama_cpp | null | null |
cstr/Spaetzle-v60-7b-Q4_0-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | ? | *Q4_0.gguf | cpu | Pending | "2024-05-11T07:32:05" | quantization | -1 | null | llama_cpp | null | null |
cstr/Spaetzle-v60-7b-int4-inc | main | false | 4.16 | MistralForCausalLM | GPTQ | 4bit | 7.04 | 4.16 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-11T11:55:16" | quantization | -1 | null | ITREX | null | null |
facebook/opt-1.3b | main | false | 1.3 | OPTForCausalLM | null | bfloat16 | null | null | null | null | null | null | FINISHED | "2024-04-10T11:23:43" | 🟢 : pretrained | 2 | "2024-04-10T05:04:42" | null | Original | |
facebook/opt-125m | main | false | 0.125 | OPTForCausalLM | null | bfloat16 | null | null | null | null | null | null | FINISHED | "2024-04-10T12:05:21" | 🟢 : pretrained | 3 | "2024-04-10T05:34:32" | null | Original | |
facebook/opt-350m | main | false | 0.35 | OPTForCausalLM | Rtn | 8bit | null | null | int8 | null | null | null | FINISHED | "2024-04-11T05:48:05" | 🟢 : pretrained | 5 | "2024-04-10T23:51:27" | null | Original | |
facebook/opt-350m | main | false | 0.35 | OPTForCausalLM | Rtn | 8bit | null | null | int8 | null | null | null | FINISHED | "2024-04-11T05:48:05" | 🟢 : pretrained | 6 | "2024-04-11T00:40:02" | null | Original | |
facebook/opt-350m | main | false | 0.35 | OPTForCausalLM | null | bfloat16 | null | null | null | null | null | null | FINISHED | "2024-04-10T13:12:22" | 🟢 : pretrained | 4 | "2024-04-10T06:13:22" | null | Original | |
leliuga/Llama-2-13b-chat-hf-bnb-4bit | main | false | 7.2 | LlamaForCausalLM | bitsandbytes | 4bit | 13.08 | 7.2 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T07:47:50" | quantization | -1 | null | ITREX | null | null |
leliuga/Phi-3-mini-128k-instruct-bnb-4bit | main | false | 2.26 | Phi3ForCausalLM | bitsandbytes | 4bit | 3.74 | 2.26 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-10T07:32:00" | quantization | -1 | null | ITREX | null | null |
lodrick-the-lafted/Olethros-8B-AWQ | main | false | 5.73 | LlamaForCausalLM | AWQ | 4bit | 7.03 | 5.73 | int4 | float16 | *Q4_0.gguf | gpu | Pending | "2024-05-11T18:47:40" | quantization | -1 | null | ITREX | null | null |
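If the request files need to be read despite the failed conversion, one possible workaround (a sketch, not the leaderboard's own tooling) is to pass an explicit `Features` schema to `load_dataset`, so that numeric fields that are entirely null in some files are still typed as `float64` rather than inferred as Arrow's null type. The repository id below is a placeholder, and typing `submitted_time`/`job_start_time` as strings is an assumption based on the quoted timestamps in the preview.

```python
from datasets import Features, Value, load_dataset

# Explicit schema matching the columns shown in the preview above.
features = Features({
    "model": Value("string"),
    "revision": Value("string"),
    "private": Value("bool"),
    "params": Value("float64"),
    "architectures": Value("string"),
    "quant_type": Value("string"),
    "precision": Value("string"),
    "model_params": Value("float64"),
    "model_size": Value("float64"),
    "weight_dtype": Value("string"),
    "compute_dtype": Value("string"),
    "gguf_ftype": Value("string"),
    "hardware": Value("string"),
    "status": Value("string"),
    "submitted_time": Value("string"),   # assumption: ISO-8601 strings
    "model_type": Value("string"),
    "job_id": Value("int64"),
    "job_start_time": Value("string"),   # assumption: ISO-8601 strings or null
    "scripts": Value("string"),
    "base_model": Value("string"),
    "weight_type": Value("string"),
})

# Placeholder repo id -- substitute the actual dataset repository.
ds = load_dataset("ORG/leaderboard-requests", split="train", features=features)
print(ds[0]["model"], ds[0]["precision"])
```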