Ultralytics Code Refactor https://ultralytics.com/actions #104

Merged — 2 commits merged on Sep 5, 2024
README.md (10 changes: 5 additions & 5 deletions)
@@ -3,7 +3,7 @@

# πŸš€ Introduction

-Welcome to the [COCO2YOLO](https://github.com/ultralytics/COCO2YOLO) repository! This toolkit is designed to help you convert datasets in JSON format, following the COCO (Common Objects in Context) standards, into YOLO (You Only Look Once) format, which is widely recognized for its efficiency in real-time object detection tasks.
+Welcome to the [COCO2YOLO](https://github.com/ultralytics/JSON2YOLO) repository! This toolkit is designed to help you convert datasets in JSON format, following the COCO (Common Objects in Context) standards, into YOLO (You Only Look Once) format, which is widely recognized for its efficiency in real-time object detection tasks.

This process is essential for machine learning practitioners looking to train object detection models using the Darknet framework. Our code is flexible and can be utilized across various platforms including Linux, MacOS, and Windows.

@@ -25,7 +25,7 @@ If you find our tool useful for your research or development, please consider ci

# 🀝 Contribute

-We welcome contributions from the community! Whether you're fixing bugs, adding new features, or improving documentation, your input is invaluable. Take a look at our [Contributing Guide](https://docs.ultralytics.com/help/contributing) to get started. Also, we'd love to hear about your experience with Ultralytics products. Please consider filling out our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A huge πŸ™ and thank you to all of our contributors!
+We welcome contributions from the community! Whether you're fixing bugs, adding new features, or improving documentation, your input is invaluable. Take a look at our [Contributing Guide](https://docs.ultralytics.com/help/contributing) to get started. Also, we'd love to hear about your experience with Ultralytics products. Please consider filling out our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A huge πŸ™ and thank you to all of our contributors!

<!-- Ultralytics contributors -->

@@ -36,12 +36,12 @@ We welcome contributions from the community! Whether you're fixing bugs, adding

Ultralytics is excited to offer two different licensing options to meet your needs:

-- **AGPL-3.0 License**: Perfect for students and hobbyists, this [OSI-approved](https://opensource.org/licenses/) open-source license encourages collaborative learning and knowledge sharing. Please refer to the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for detailed terms.
-- **Enterprise License**: Ideal for commercial use, this license allows for the integration of Ultralytics software and AI models into commercial products without the open-source requirements of AGPL-3.0. For use cases that involve commercial applications, please contact us via [Ultralytics Licensing](https://ultralytics.com/license).
+- **AGPL-3.0 License**: Perfect for students and hobbyists, this [OSI-approved](https://opensource.org/license) open-source license encourages collaborative learning and knowledge sharing. Please refer to the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for detailed terms.
+- **Enterprise License**: Ideal for commercial use, this license allows for the integration of Ultralytics software and AI models into commercial products without the open-source requirements of AGPL-3.0. For use cases that involve commercial applications, please contact us via [Ultralytics Licensing](https://www.ultralytics.com/license).

# πŸ“¬ Contact Us

-For bug reports, feature requests, and contributions, head to [GitHub Issues](https://github.com/ultralytics/JSON2YOLO/issues). For questions and discussions about this project and other Ultralytics endeavors, join us on [Discord](https://ultralytics.com/discord)!
+For bug reports, feature requests, and contributions, head to [GitHub Issues](https://github.com/ultralytics/JSON2YOLO/issues). For questions and discussions about this project and other Ultralytics endeavors, join us on [Discord](https://discord.com/invite/ultralytics)!

<br>
<div align="center">
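The README changes above describe the COCO-JSON-to-YOLO conversion only in prose. As a minimal sketch of what that conversion entails (the helper name `coco_to_yolo_bbox` is illustrative, not part of the repository's API): COCO boxes are absolute-pixel `[x_min, y_min, width, height]`, while YOLO labels are `[x_center, y_center, width, height]` normalized to `[0, 1]` by image size.

```python
def coco_to_yolo_bbox(box, img_w, img_h):
    """Convert a COCO [x_min, y_min, width, height] box in absolute pixels
    to a YOLO [x_center, y_center, width, height] box normalized to [0, 1].
    Illustrative helper, not repository code."""
    x, y, w, h = box
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]
```

For example, a 200x100 box at (50, 100) in a 640x480 image maps to a normalized center of (0.234375, 0.3125) with width 0.3125 and height ~0.2083.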
general_json2yolo.py (27 changes: 13 additions & 14 deletions)
@@ -33,13 +33,13 @@ def convert_infolks_json(name, files, img_path):

# filename
with open(name + ".txt", "a") as file:
-file.write("%s\n" % f)
+file.write(f"{f}\n")

# Write *.names file
names = sorted(np.unique(cat))
# names.pop(names.index('Missing product')) # remove
with open(name + ".names", "a") as file:
-[file.write("%s\n" % a) for a in names]
+[file.write(f"{a}\n") for a in names]

# Write labels file
for i, x in enumerate(tqdm(data, desc="Annotations")):
@@ -58,7 +58,7 @@ def convert_infolks_json(name, files, img_path):
box[[1, 3]] /= wh[i][1] # normalize y by height
box = [box[[0, 2]].mean(), box[[1, 3]].mean(), box[2] - box[0], box[3] - box[1]] # xywh
if (box[2] > 0.0) and (box[3] > 0.0): # if w > 0 and h > 0
-file.write("%g %.6f %.6f %.6f %.6f\n" % (category_id, *box))
+file.write("{:g} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(category_id, *box))

# Split data into train, test, and validate files
split_files(name, file_name)
@@ -89,7 +89,7 @@ def convert_vott_json(name, files, img_path):
# Write *.names file
names = sorted(pd.unique(cat))
with open(name + ".names", "a") as file:
-[file.write("%s\n" % a) for a in names]
+[file.write(f"{a}\n") for a in names]

# Write labels file
n1, n2 = 0, 0
@@ -107,7 +107,7 @@

# append filename to list
with open(name + ".txt", "a") as file:
-file.write("%s\n" % f)
+file.write(f"{f}\n")

# write labelsfile
label_name = Path(f).stem + ".txt"
@@ -123,11 +123,11 @@
box = [box[0] + box[2] / 2, box[1] + box[3] / 2, box[2], box[3]] # xywh

if (box[2] > 0.0) and (box[3] > 0.0): # if w > 0 and h > 0
-file.write("%g %.6f %.6f %.6f %.6f\n" % (category_id, *box))
+file.write("{:g} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(category_id, *box))
else:
missing_images.append(x["asset"]["name"])

-print("Attempted %g json imports, found %g images, imported %g annotations successfully" % (i, n1, n2))
+print(f"Attempted {i:g} json imports, found {n1:g} images, imported {n2:g} annotations successfully")
if len(missing_images):
print("WARNING, missing images:", missing_images)

@@ -203,7 +203,7 @@ def convert_ath_json(json_dir): # dir contains json annotations and images
] # xywh (left-top to center x-y)

if box[2] > 0.0 and box[3] > 0.0: # if w > 0 and h > 0
-file.write("%g %.6f %.6f %.6f %.6f\n" % (category_id, *box))
+file.write("{:g} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(category_id, *box))
n3 += 1
nlabels += 1

@@ -224,7 +224,7 @@ def convert_ath_json(json_dir): # dir contains json annotations and images
ifile = dir + "images/" + Path(f).name
if cv2.imwrite(ifile, img): # if success append image to list
with open(dir + "data.txt", "a") as file:
-file.write("%s\n" % ifile)
+file.write(f"{ifile}\n")
n2 += 1 # correct images

except Exception:
@@ -236,16 +236,15 @@ def convert_ath_json(json_dir): # dir contains json annotations and images

nm = len(missing_images) # number missing
print(
-"\nFound %g JSONs with %g labels over %g images. Found %g images, labelled %g images successfully"
-% (len(jsons), n3, n1, n1 - nm, n2)
+f"\nFound {len(jsons):g} JSONs with {n3:g} labels over {n1:g} images. Found {n1 - nm:g} images, labelled {n2:g} images successfully"
)
if len(missing_images):
print("WARNING, missing images:", missing_images)

# Write *.names file
names = ["knife"] # preserves sort order
with open(dir + "data.names", "w") as f:
-[f.write("%s\n" % a) for a in names]
+[f.write(f"{a}\n") for a in names]

# Split data into train, test, and validate files
split_rows_simple(dir + "data.txt")
@@ -266,15 +265,15 @@ def convert_coco_json(json_dir="../coco/annotations/", use_segments=False, cls91
data = json.load(f)

# Create image dict
-images = {"%g" % x["id"]: x for x in data["images"]}
+images = {"{:g}".format(x["id"]): x for x in data["images"]}
# Create image-annotations dict
imgToAnns = defaultdict(list)
for ann in data["annotations"]:
imgToAnns[ann["image_id"]].append(ann)

# Write labels file
for img_id, anns in tqdm(imgToAnns.items(), desc=f"Annotations {json_file}"):
-img = images["%g" % img_id]
+img = images[f"{img_id:g}"]
h, w, f = img["height"], img["width"], img["file_name"]

bboxes = []
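Every hunk in `general_json2yolo.py` above follows one pattern: printf-style `%` formatting replaced by f-strings or `str.format`, with `%g` becoming `{:g}` and `%.6f` becoming `{:.6f}`. A small self-contained check (illustrative, not repository code) that the two spellings produce identical output:

```python
# Sample values shaped like the diff's label-writing call: a class id and a normalized xywh box.
category_id, box = 3, (0.5, 0.5, 0.25, 0.125)

# Old printf-style spelling vs. the refactored str.format spelling.
old = "%g %.6f %.6f %.6f %.6f\n" % (category_id, *box)
new = "{:g} {:.6f} {:.6f} {:.6f} {:.6f}\n".format(category_id, *box)
assert old == new

# The {:g} spec, like %g, renders an integer without a trailing decimal point.
n = 5
assert ("%g" % n) == f"{n:g}"
```

`str.format` (rather than an f-string) is the natural target where the original unpacked `*box` into the format string, since argument unpacking cannot be written inside a single f-string; the simple single-value cases convert directly to f-strings like `f"{f}\n"`.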
utils.py (8 changes: 4 additions & 4 deletions)
@@ -55,7 +55,7 @@ def split_files(out_path, file_name, prefix_path=""): # split training data
if item.any():
with open(f"{out_path}_{key}.txt", "a") as file:
for i in item:
-file.write("%s%s\n" % (prefix_path, file_name[i]))
+file.write(f"{prefix_path}{file_name[i]}\n")


def split_indices(x, train=0.9, test=0.1, validate=0.0, shuffle=True): # split training data
@@ -84,7 +84,7 @@ def make_dirs(dir="new_dir/"):
def write_data_data(fname="data.data", nc=80):
"""Writes a Darknet-style .data file with dataset and training configuration."""
lines = [
-"classes = %g\n" % nc,
+f"classes = {nc:g}\n",
"train =../out/data_train.txt\n",
"valid =../out/data_test.txt\n",
"names =../out/data.names\n",
@@ -153,7 +153,7 @@ def flatten_recursive_folders(path="../../Downloads/data/sm4/"): # from utils i
stem, suffix = f.stem, f.suffix
if suffix.lower()[1:] in img_formats:
n += 1
-stem_new = "%g_" % n + stem
+stem_new = f"{n:g}_" + stem
image_new = nidir / (stem_new + suffix) # converts all formats to *.jpg
json_new = njdir / f"{stem_new}.json"

@@ -164,7 +164,7 @@
os.system(f"cp '{image}' '{image_new}'")
# cv2.imwrite(str(image_new), cv2.imread(str(image)))

-print("Flattening complete: %g jsons and images" % n)
+print(f"Flattening complete: {n:g} jsons and images")


def coco91_to_coco80_class(): # converts 80-index (val2014) to 91-index (paper)
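The `utils.py` hunks show the same migration; for instance, `write_data_data` now builds its first line with `f"classes = {nc:g}"`. A runnable sketch of that function's shape, assembled from the lines visible in the diff (the default paths come from the diff; the temp filename in the usage note is illustrative):

```python
from pathlib import Path

def write_data_data(fname="data.data", nc=80):
    """Write a Darknet-style .data file; mirrors the post-refactor lines in the diff."""
    lines = [
        f"classes = {nc:g}\n",  # {:g} renders the int 80 as "80", matching the old "%g" % nc
        "train =../out/data_train.txt\n",
        "valid =../out/data_test.txt\n",
        "names =../out/data.names\n",
    ]
    text = "".join(lines)
    Path(fname).write_text(text)
    return text
```

Calling `write_data_data("/tmp/data.data", nc=80)` produces a file whose first line is `classes = 80`.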