feat: implement base macro system

Travis Abendshien
2025-02-22 21:49:14 -08:00
parent 8d7ba0dd86
commit 20d641d6f3
13 changed files with 1029 additions and 291 deletions


@@ -25,3 +25,7 @@ A long string of text displayed as a box of text.
A date and time value.
- e.g.: Date Published, Date Taken, etc.
<!-- prettier-ignore -->
!!! note
Datetime types are a work in progress, and can't currently be manually created or edited.


@@ -2,49 +2,389 @@
icon: material/script-text
---
# :material-script-text: Tools & Macros
# :material-script-text: Macros
Tools and macros are features that serve to create a more fluid [library](libraries.md)-managing process, or provide some extra functionality. Please note that some are still in active development and will be fleshed out in future updates.
TagStudio features a configurable macro system which allows you to set up automatic or manually triggered actions to perform a wide array of operations on your [library](libraries.md). Each macro is stored in an individual script file and is created using [TOML](https://toml.io/en/) with a predefined schema described below. Macro files are stored in your library's ".TagStudio/macros" folder.
## Tools
## Schema Version
### Fix Unlinked Entries
The `schema_version` key declares which version of the macro schema is currently being used. Current schema version: 1.
This tool displays the number of unlinked [entries](entries.md), and some options for their resolution.
```toml
schema_version = 1
```
Refresh
: Scans through the library and updates the unlinked entry count.
## Triggers
Search & Relink
: Attempts to automatically find and reassign missing files.
The `triggers` key declares when a macro may be automatically run. Macros can still be triggered manually even if they have automatic triggers defined.
Delete Unlinked Entries
: Displays a confirmation prompt containing the list of all missing files to be deleted before committing to or cancelling the operation.
- `on_open`: Run when the TagStudio library is opened.
- `on_refresh`: Run when the TagStudio library's directories have been refreshed.
- `on_new_entry`: Run when a new [file entry](entries.md) has been created.
### Fix Duplicate Files
```toml
triggers = ["on_new_entry"]
```
This tool allows for management of duplicate files in the library using a [DupeGuru](https://dupeguru.voltaicideas.net/) file.
## Actions
Load DupeGuru File
: load the "results" file created from a DupeGuru scan
Actions are defined as TOML tables, each serving as an individual operation that the macro will perform. An action table must have a unique name, but the name itself has no special meaning to the macro.
Mirror Entries
: Duplicate entries will have their contents mirrored across all instances. This allows for duplicate files to then be deleted with DupeGuru as desired, without losing the [field](fields.md) data that has been assigned to either. (Once deleted, the "Fix Unlinked Entries" tool can be used to clean up the duplicates)
```toml
[newgrounds]
```
### Create Collage
Action tables must have an `action` key with one of the following values:
This tool is a preview of an upcoming feature. When selected, TagStudio will generate a collage of all the contents in a Library, which can be found in the Library folder ("/your-folder/.TagStudio/collages/"). Note that this feature is still in early development, and doesn't yet offer any customization options.
- [`import_data`](#import-data): Import data from a supported external source.
- [`add_data`](#add-data): Add data declared inside the macro file.
## Macros
```toml
[newgrounds]
action = "import_data"
```
### Auto-fill [WIP]
### Import Data
Tool is in development and will be documented in a future update.
The `import_data` action allows you to import external data from any supported source. While some sources need explicit support (e.g. ID3, EXIF), generic sources such as JSON sidecar files can leverage a wide array of data-shaping options that allow the underlying data structure to be abstracted away from TagStudio's internal data structures. This action pairs very well with tools such as [gallery-dl](https://github.com/mikf/gallery-dl).
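A minimal sketch of a hypothetical `import_data` action targeting a [gallery-dl](https://github.com/mikf/gallery-dl) JSON sidecar; each key used here is described in the sections below:
```toml
[newgrounds]
action = "import_data"
source_format = "json"
source_location = "{filename}.json"

[newgrounds.tags]
ts_type = "tags"
on_missing = "create"
```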
### Sort fields
### Add Data
Tool is in development. Will allow for user-defined sorting of [fields](fields.md).
The `add_data` action lets you add data to a [file entry](entries.md) given one or more conditional statements. Unlike the [`import_data`](#import-data) action, the `add_data` action adds data declared in the macro itself rather than importing it from a source external to the macro.
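As a quick sketch, an `add_data` action that applies an "Animated" tag to GIF files might look like this (see [Source Filters](#source-filters) and [Value](#value) below for the individual keys):
```toml
[animated]
action = "add_data"
source_filters = ["**/*.gif"]

[animated.tags]
ts_type = "tags"
value = ["Animated"]
```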
### Folders to Tags
### Data Sources
Creates tags from the existing folder structure in the library, which are previewed in a hierarchy view for the user to confirm. A tag will be created for each folder and applied to all entries, with each subfolder being linked to the parent folder as a [parent tag](tags.md#parent-tags). Tags will initially be named after the folders, but can be fully edited and customized afterwards.
#### Source Format
The `source_format` key is used in conjunction with the [`import_data`](#import-data) action to declare what type of source the data will be imported from.
```toml
[newgrounds]
action = "import_data"
source_format = "json"
```
- `exif`: Embedded EXIF metadata
- `id3`: Embedded ID3 metadata
- `json`: A JSON formatted file
- `text`: A plain text file
- `xml`: An XML formatted file
- `xmp`: Embedded XMP metadata or an XMP sidecar file
#### Source Location
The `source_location` key is used in conjunction with the `import_data` action to declare where the metadata should be imported from. This can be a relative or absolute path, and can reference the targeted filename with the `{filename}` placeholder.
```toml
[newgrounds]
action = "import_data"
source_format = "json"
source_location = "{filename}.json" # Relative sidecar file
```
<!-- - `absolute`: An absolute file location
- `embedded`: Data that's embedded within the targeted file
- `sidecar`: A sidecar file with a relative file location -->
#### Embedded Metadata
If targeting embedded data, add the `is_embedded` key and set it to `true`. If no `source_location` is given, the file this macro is targeting will be used as the source.
```toml
[newgrounds]
action = "import_data"
source_format = "id3"
is_embedded = true
```
#### Source Filters
`source_filters` declares a list of glob patterns for files that can be targeted by this action. An entry's filepath only needs to match one of the given source filters for the action to continue; if none match, the action is skipped for this file entry.
<!-- prettier-ignore -->
=== "import_data"
```toml
[newgrounds]
action = "import_data"
source_format = "json"
source_location = "{filename}.json"
source_filters = ["**/Newgrounds/**"]
```
=== "add_data"
```toml
[animated]
action = "add_data"
source_filters = ["**/*.gif", "**/*.apng"]
```
#### Value
The `value` key is specifically used with the [`add_data`](#add-data) action to define what value should be added to the file entry.
<!-- prettier-ignore -->
=== "Title Field"
```toml
[animated]
action = "add_data"
source_filters = ["**/*.gif", "**/*.apng"]
[animated.title]
ts_type = "text_line"
name = "Title"
value = "Animated Image"
```
=== "Tags"
```toml
[animated]
action = "add_data"
source_filters = ["**/*.gif", "**/*.apng"]
[animated.tags]
ts_type = "tags"
value = ["Animated"]
```
### Importing Data
With the [`import_data`](#import-data) action it's possible to import various types of data into your TagStudio library in the form of [tags](tags.md) and [fields](fields.md). Since TagStudio tags are more complex than other tag implementations you may be aware of, there are several options for fine-tuning how tags should be imported.
To target the specific kind of TagStudio data you wish to import, create a table named after your action followed by the "key" name, separated by a dot. In most object notation formats (e.g. JSON), this is the key of the key/value pair.
<!-- prettier-ignore -->
=== "Importable JSON Data"
```json
{
"newgrounds": {
"tags": ["tag1", "tag2"]
}
}
```
=== "TOML Macro"
```toml
[newgrounds]
# Newgrounds table info here
[newgrounds.tags]
# Tag key config here
```
Inside "key" table we can now declare additional information about the native data formats and how they should be imported into TagStudio.
<!-- #### Source Types
The `source_type` key allows for the explicit declaration of the type and/or format of the source data. When this key is omitted, TagStudio will default to the data type that makes the most sense for the destination [TagStudio type](#tagstudio-types).
- `string`: A character string (text)
- `integer`: An integer
- `float`: A floating point number
- `url`: A string with a special URL formatting pass
- [`ISO8601`](https://en.wikipedia.org/wiki/ISO_8601) A standard datetime format
- `list:string`: List of strings (text)
- `list:integer`: List of integers
- `list:float`: List of floating point numbers -->
#### TagStudio Types
The required `ts_type` key defines the destination data format inside TagStudio itself. This can be [tags](tags.md) or any [field](fields.md) type.
- [`tags`](tags.md)
- [`text_line`](fields.md#text-line)
- [`text_box`](fields.md#text-box)
- [`datetime`](fields.md#datetime)
<!-- prettier-ignore -->
=== "Title Field"
```toml
[newgrounds]
# newgrounds table info here
[newgrounds.title]
ts_type = "text_line"
name = "Title"
```
=== "Tags"
```toml
[newgrounds]
# newgrounds table info here
[newgrounds.tags]
ts_type = "tags"
```
### Multiple Imports per Key
You may wish to import from the same source key more than once with different instructions. In this case, wrap the table name in an extra pair of brackets for every duplicate key table. This ensures that TagStudio will process each group of instructions for the single key individually.
<!-- prettier-ignore -->
=== "Single Import per Key"
```toml
[newgrounds]
# Newgrounds table info here
[newgrounds.artist]
ts_type = "tags"
use_context = false
on_missing = "create"
```
=== "Multiple Imports per Key"
```toml
[newgrounds]
# Newgrounds table info here
[[newgrounds.artist]]
ts_type = "tags"
use_context = false
on_missing = "skip"
[[newgrounds.artist]]
ts_type = "text_line"
name = "Artist"
```
### Field Specific Keys
#### Name
`name`: The name of the field to import into.
<!-- prettier-ignore -->
=== "text_line"
```toml
[newgrounds.user]
ts_type = "text_line"
name = "Author"
```
=== "text_box"
```toml
[newgrounds.content]
ts_type = "text_box"
name = "Description"
```
<!-- prettier-ignore -->
!!! note
As of writing (v9.5.0), TagStudio fields still do not allow for custom names. The macro system is designed to be forward-thinking with this feature in mind; however, currently only existing TagStudio field names are considered valid. Any invalid field names will default to the "Notes" field.
### Tag Specific Keys
#### Delimiter
`delimiter`: The delimiter used to separate tags.
<!-- prettier-ignore -->
=== "Comma + Space Separation"
```toml
[newgrounds.tags]
ts_type = "tags"
delimiter = ", "
```
=== "Newline Separation"
```toml
[newgrounds.tags]
ts_type = "tags"
delimiter = "\n"
```
#### Prefix
`prefix`: An optional prefix to remove.
Given a list of tags such as `["#tag1", "#tag2", "#tag3"]`, you may wish to remove the "#" prefix.
```toml
[instagram.tags]
ts_type = "tags"
prefix = "#"
```
#### Strict
`strict`: Determines which [names](tags.md#naming-tags) of TagStudio tags should be compared against the source data when matching.
- `true`: Only match against the TagStudio tag [name](tags.md#name) field.
- `false` (Default): Match against any TagStudio tag name field including [shorthands](tags.md#shorthand), [aliases](tags.md#aliases), and the [disambiguation name](tags.md#disambiguation).
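For example, a minimal sketch that restricts matching to exact tag names only:
```toml
[newgrounds.tags]
ts_type = "tags"
strict = true
```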
#### Use Context
`use_context`: Determines if TagStudio should use context clues from other source tags to provide more accurate tag matches.
- `true` (Default): Use context clue matching (slower, less ambiguous).
- `false`: Ignore surrounding source tags (faster, more ambiguous).
#### On Missing
`on_missing`: Determines how to react to source tags with no match in the library.
- `"prompt"`: Ask the user if they wish to create, skip, or manually choose an existing tag.
- `"create"`: Automatically create a new TagStudio tag based on the source tag.
- `"skip"` (Default): Ignore the unmatched tags.
```toml
[newgrounds.tags]
ts_type = "tags"
strict = false
use_context = true
on_missing = "create"
```
### Manual Tag Mapping
If the results from the standard tag matching system aren't good enough to properly import specific source data into your TagStudio library, you can manually specify mappings between source and destination tags. A table with the `.map` or `.reverse_map` suffix will be used to map tags in the nearest scope.
<!-- prettier-ignore -->
=== "Global Scope"
```toml
# Applies to all actions in the macro file
[map]
```
=== "Action Scope"
```toml
# Applies to all tag keys in the "newgrounds" action
[newgrounds.map]
```
=== "Key Scope"
```toml
# Only applies to tags within the "ratings" key inside the "newgrounds" action
[newgrounds.ratings.map]
```
- `map`: Used for [1 to 0](#1-to-0-ignore-matches), [1 to 1](#1-to-1), and [1 to Many](#1-to-many) mappings.
- `reverse_map`: Used for [Many to 1](#many-to-1-reverse-map) mappings.
#### 1 to 0 (Ignore Matches)
By mapping the key of the source tag name to an empty string, you can ignore that tag when matching with your own library. This is useful if you're importing from a source that uses tags you don't wish to use or create inside your own libraries.
```toml
[newgrounds.tags.map]
# Source Tag Name = Nothing, Ignore Matches
favorite = ""
```
#### 1 to 1
By mapping the key of the source tag name to the name of one of your TagStudio tags, you can directly specify a destination tag while bypassing the matching algorithm.
<!-- prettier-ignore -->
!!! tip
Consider using tag [aliases](tags.md#aliases) instead of 1 to 1 mapping. This mapping technique is useful if you want to map a specific source tag to a destination tag without treating the source tag as an alternate name for it.
```toml
[newgrounds.tags.map]
# Source Tag Name = TagStudio Tag Name
colored_pencil = "Drawing"
```
#### 1 to Many
By mapping the key of the source tag name to a list of your TagStudio tag names, you can cause one source tag to import as more than one of your TagStudio tags.
```toml
[newgrounds.tags.map]
# Source Tag Name = List of TagStudio Tag Names
drawing = ["Drawing (2D)", "Image (Meta Tags)"]
video = ["Animation (2D)", "Animated (Meta Tags)"]
```
#### Many to 1 (Reverse Map)
By mapping a key of the name of one of your TagStudio tags to a list of source tags, you can declare a combination of required source tags that result in a wholly new matched TagStudio tag. This is useful if you use a single tag in your TagStudio library that is represented by multiple split tags from your source.
```toml
[newgrounds.tags.reverse_map]
# TagStudio Tag Name = List of Source Tag Names
"Animation (2D)" = ["drawing", "video"]
"Animation (3D)" = ["3D", "video"]
```

docs/relinking.md Normal file

@@ -0,0 +1,34 @@
---
title: Entry Relinking
icon: material/link-variant
---
# :material-link-variant: Entry Relinking
### Fix Unlinked Entries
This tool displays the number of unlinked [entries](entries.md), and some options for their resolution.
Refresh
- Scans through the library and updates the unlinked entry count.
Search & Relink
- Attempts to automatically find and reassign missing files.
Delete Unlinked Entries
- Displays a confirmation prompt containing the list of all missing files to be deleted before committing to or cancelling the operation.
### Fix Duplicate Files
This tool allows for management of duplicate files in the library using a [DupeGuru](https://dupeguru.voltaicideas.net/) file.
Load DupeGuru File
- load the "results" file created from a DupeGuru scan
Mirror Entries
- Duplicate entries will have their contents mirrored across all instances. This allows for duplicate files to then be deleted with DupeGuru as desired, without losing the [field](fields.md) data that has been assigned to either. (Once deleted, the "Fix Unlinked Entries" tool can be used to clean up the duplicates)


@@ -252,9 +252,15 @@ Discrete library objects representing [attributes](<https://en.wikipedia.org/wik
- [ ] On Library Refresh :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] [...]
- [ ] Actions **[v9.5.x]**
- [ ] Add Tag(s) :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Add Field(s) :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Set Field Content :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Import from JSON file :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Import from plaintext file :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Import from XML file :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Create templated fields from other table keys :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Remove tag prefixes from import sources :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Specify tag delimiters from import sources :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Add data (tags + fields) configured in macro :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Glob filter for entry file :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Map source tags to TagStudio tags :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] [...]
### :material-table-arrow-right: Sharable Data
@@ -274,7 +280,7 @@ Packs are intended as an easy way to import and export specific data between lib
- [ ] UUIDs + Namespaces :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Standard, Human Readable Format (TOML) :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Versioning System :material-chevron-double-up:{ .priority-med title="Medium Priority" }
- [ ] Macro Sharing :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.x]**
- [x] Macro Sharing :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.x]**
- [ ] Importable :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Exportable :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Sharable Entry Data :material-chevron-double-up:{ .priority-med title="Medium Priority" } **[v9.9.x]**


@@ -43,8 +43,10 @@ nav:
- entries.md
- preview-support.md
- search.md
- ignore.md
- macros.md
- Management:
- relinking.md
- ignore.md
- Fields:
- fields.md
- Tags:


@@ -10,6 +10,7 @@ TS_FOLDER_NAME: str = ".TagStudio"
BACKUP_FOLDER_NAME: str = "backups"
COLLAGE_FOLDER_NAME: str = "collages"
IGNORE_NAME: str = ".ts_ignore"
MACROS_FOLDER_NAME: str = "macros"
THUMB_CACHE_NAME: str = "thumbs"
FONT_SAMPLE_TEXT: str = (


@@ -51,14 +51,6 @@ class OpenStatus(enum.IntEnum):
CORRUPTED = 2
class MacroID(enum.Enum):
AUTOFILL = "autofill"
SIDECAR = "sidecar"
BUILD_URL = "build_url"
MATCH = "match"
CLEAN_URL = "clean_url"
class DefaultEnum(enum.Enum):
"""Allow saving multiple identical values in property called .default."""


@@ -1062,19 +1062,21 @@ class Library:
selectinload(Tag.parent_tags),
selectinload(Tag.aliases),
)
if limit > 0:
query = query.limit(limit)
if name:
query = query.where(
or_(
Tag.name.icontains(name),
Tag.shorthand.icontains(name),
TagAlias.name.icontains(name),
Tag.name.istartswith(name),
Tag.shorthand.istartswith(name),
TagAlias.name.istartswith(name),
)
)
direct_tags = set(session.scalars(query))
ancestor_tag_ids: list[Tag] = []
for tag in direct_tags:
ancestor_tag_ids.extend(
@@ -1092,13 +1094,13 @@ class Library:
{at for at in ancestor_tags if at not in direct_tags},
]
logger.info(
"searching tags",
search=name,
limit=limit,
statement=str(query),
results=len(res),
)
# logger.info(
# "searching tags",
# search=name,
# limit=limit,
# statement=str(query),
# results=len(res),
# )
session.expunge_all()
@@ -1256,7 +1258,7 @@ class Library:
with Session(self.engine) as session:
return {x.key: x for x in session.scalars(select(ValueType)).all()}
def get_value_type(self, field_key: str) -> ValueType:
def get_value_type(self, field_key: str | None) -> ValueType | None:
with Session(self.engine) as session:
field = unwrap(session.scalar(select(ValueType).where(ValueType.key == field_key)))
session.expunge(field)
@@ -1269,6 +1271,7 @@ class Library:
field: ValueType | None = None,
field_id: FieldID | str | None = None,
value: str | datetime | None = None,
skip_on_exists: bool = False,
) -> bool:
logger.info(
"[Library][add_field_to_entry]",
@@ -1285,6 +1288,27 @@ class Library:
field_id = field_id.name
field = self.get_value_type(unwrap(field_id))
if not field:
logger.error(
"[Library] Could not add field to entry, invalid field type.", entry_id=entry_id
)
return False
if skip_on_exists:
entry = self.get_entry_full(entry_id, with_tags=False)
if not entry:
logger.exception("[Library] Entry does not exist", entry_id=entry_id)
return False
for field_ in entry.fields:
if field_.value == value and field_.type_key == field_id:
logger.info(
"[Library] Field value already exists for entry",
entry_id=entry_id,
value=value,
type=field_id,
)
return False
field_model: TextField | DatetimeField
if field.type in (FieldTypeEnum.TEXT_LINE, FieldTypeEnum.TEXT_BOX):
field_model = TextField(
@@ -1463,10 +1487,21 @@ class Library:
for tag_id in tag_ids_:
for entry_id in entry_ids_:
try:
logger.info(
"[Library][add_tags_to_entries] Adding tag to entry...",
tag_id=tag_id,
entry_id=entry_id,
)
session.add(TagEntry(tag_id=tag_id, entry_id=entry_id))
total_added += 1
session.commit()
except IntegrityError:
except IntegrityError as e:
logger.warning(
"[Library][add_tags_to_entries] Tag already on entry",
warning=e,
tag_id=tag_id,
entry_id=entry_id,
)
session.rollback()
return total_added
@@ -1563,6 +1598,7 @@ class Library:
return tag
# TODO: Fix and consolidate code with search_tags()
def get_tag_by_name(self, tag_name: str) -> Tag | None:
with Session(self.engine) as session:
statement = (


@@ -0,0 +1,425 @@
# Copyright (C) 2025 Travis Abendshien (CyanVoxel).
# Licensed under the GPL-3.0 License.
# Created for TagStudio: https://github.com/CyanVoxel/TagStudio
import json
from enum import StrEnum
from pathlib import Path
from typing import Any, override
import structlog
import toml
from wcmatch import glob
from tagstudio.core.library.alchemy.fields import FieldID
logger = structlog.get_logger(__name__)
SCHEMA_VERSION = "schema_version"
TRIGGERS = "triggers"
ACTION = "action"
SOURCE_LOCATION = "source_location"
SOURCE_FILTERS = "source_filters"
SOURCE_FORMAT = "source_format"
FILENAME_PLACEHOLDER = "{filename}"
EXT_PLACEHOLDER = "{ext}"
TEMPLATE = "template"
SOURCE_TYPE = "source_type"
TS_TYPE = "ts_type"
NAME = "name"
VALUE = "value"
TAGS = "tags"
TEXT_LINE = "text_line"
TEXT_BOX = "text_box"
DATETIME = "datetime"
PREFIX = "prefix"
DELIMITER = "delimiter"
STRICT = "strict"
USE_CONTEXT = "use_context"
ON_MISSING = "on_missing"
JSON = "json"
XMP = "xmp"
EXIF = "exif"
ID3 = "id3"
MAP = "map"
REVERSE_MAP = "reverse_map"
class Actions(StrEnum):
IMPORT_DATA = "import_data"
ADD_DATA = "add_data"
class OnMissing(StrEnum):
PROMPT = "prompt"
CREATE = "create"
SKIP = "skip"
class DataResult:
def __init__(self) -> None:
pass
class FieldResult(DataResult):
def __init__(self, content: Any, name: FieldID, field_type: str) -> None:
super().__init__()
self.content = content
self.name = name
self.type = field_type
@override
def __str__(self) -> str:
return str(self.content)
class TagResult(DataResult):
def __init__(
self,
tag_strings: list[str],
use_context: bool = True,
strict: bool = False,
on_missing: str = OnMissing.SKIP,
prefix: str = "",
) -> None:
super().__init__()
self.tag_strings = tag_strings
self.use_context = use_context
self.strict = strict
self.on_missing = on_missing
self.prefix = prefix
@override
def __str__(self) -> str:
return str(self.tag_strings)
def parse_macro_file(
macro_path: Path,
filepath: Path,
) -> list[DataResult]:
"""Parse a macro file and return a list of actions for TagStudio to perform.
Args:
macro_path (Path): The full path of the macro file.
filepath (Path): The filepath associated with Entry being operated upon.
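Example (hypothetical paths, sketching intended usage):
results = parse_macro_file(
Path("/library/.TagStudio/macros/import_metadata.toml"),
Path("/library/Newgrounds/artwork.png"),
)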
"""
results: list[DataResult] = []
logger.info("[MacroParser] Parsing Macro", macro_path=macro_path, filepath=filepath)
if not macro_path.exists():
logger.error("[MacroParser] Macro path does not exist", macro_path=macro_path)
return results
if not filepath.exists():
logger.error("[MacroParser] Filepath does not exist", filepath=filepath)
return results
with open(macro_path) as f:
macro = toml.load(f)
logger.info(macro)
# Check Schema Version
schema_ver = macro.get(SCHEMA_VERSION, 0)
if not isinstance(schema_ver, int):
logger.error(
f"[MacroParser] Incorrect type for {SCHEMA_VERSION}, expected int",
schema_ver=schema_ver,
)
return results
if schema_ver != 1:
logger.error(f"[MacroParser] Unsupported Schema Version: {schema_ver}")
return results
logger.info(f"[MacroParser] Schema Version: {schema_ver}")
# Load Triggers
triggers = macro.get(TRIGGERS, [])
if not isinstance(triggers, list):
logger.error(
f"[MacroParser] Incorrect type for {TRIGGERS}, expected list", triggers=triggers
)
# Parse each action table
for table_key in macro:
if table_key in {SCHEMA_VERSION, TRIGGERS}:
continue
logger.info("[MacroParser] Parsing Table", table_key=table_key)
table: dict[str, Any] = macro[table_key]
logger.info(table.keys())
# TODO: Replace with table conditionals
source_filters: list[str] = table.get(SOURCE_FILTERS, [])
conditions_met: bool = False
if not source_filters:
# No filters provided; the action applies to every file entry.
logger.info(f'[MacroParser] No "{SOURCE_FILTERS}" provided')
conditions_met = True
else:
for filter_ in source_filters:
if glob.globmatch(filepath, filter_, flags=glob.GLOBSTAR):
logger.info(
f"[MacroParser] [{table_key}] "
f'{SOURCE_FILER}" Met filter requirement: {filter_}'
)
conditions_met = True
if not conditions_met:
logger.warning(
f"[MacroParser] [{table_key}] File didn't meet any path filter requirement",
filters=source_filters,
filepath=filepath,
)
continue
action: str = table.get(ACTION, "")
logger.info(f'[MacroParser] [{table_key}] "{ACTION}": {action}')
if action == Actions.IMPORT_DATA:
results.extend(import_data(table, table_key, filepath))
elif action == Actions.ADD_DATA:
results.extend(add_data(table))
logger.info(results)
return results
def import_data(table: dict[str, Any], table_key: str, filepath: Path) -> list[DataResult]:
"""Process an import_data instruction and return a list of DataResults.
Importing data refers to importing data from a source external to TagStudio or any macro.
"""
results: list[DataResult] = []
source_format: str = str(table.get(SOURCE_FORMAT, ""))
if not source_format:
logger.error(f'[MacroParser] Parser Error: No "{SOURCE_FORMAT}" provided for table')
logger.info(f'[MacroParser] [{table_key}] "{SOURCE_FORMAT}": {source_format}')
raw_source_location = str(table.get(SOURCE_LOCATION, ""))
if FILENAME_PLACEHOLDER in raw_source_location:
# logger.info(f"[MacroParser] Filename placeholder detected: {raw_source_location}")
raw_source_location = raw_source_location.replace(FILENAME_PLACEHOLDER, str(filepath.stem))
if EXT_PLACEHOLDER in raw_source_location:
# logger.info(f"[MacroParser] File extension placeholder detected: {raw_source_location}")
# TODO: Make work with files that have multiple suffixes, like .tar.gz
raw_source_location = raw_source_location.replace(
EXT_PLACEHOLDER,
str(filepath.suffix)[1:], # Remove leading "."
)
if not raw_source_location.startswith(("/", "~")):
# The source location must be relative to the given filepath
source_location = filepath.parent / Path(raw_source_location)
else:
source_location = Path(raw_source_location)
logger.info(f'[MacroParser] [{table_key}] "{SOURCE_LOCATION}": {source_location}')
if not source_location.exists():
logger.error(
"[MacroParser] Sidecar filepath does not exist", source_location=source_location
)
return results
if source_format.lower() == JSON:
logger.info("[MacroParser] Parsing JSON sidecar file", sidecar_path=source_location)
with open(source_location, encoding="utf8") as f:
json_dump = json.load(f)
if not json_dump:
logger.warning("[MacroParser] Empty JSON sidecar file")
return results
logger.info(json_dump.items())
for key, table_value in table.items():
objects: list[dict[str, Any] | str] = []
content_value = ""
if isinstance(table_value, list):
objects = table_value
else:
objects.append(table_value)
for obj in objects:
if not isinstance(obj, dict):
continue
ts_type: str = str(obj.get(TS_TYPE, ""))
if not ts_type:
logger.warning(
f'[MacroParser] [{table_key}] No "{TS_TYPE}" key provided, skipping'
)
continue
if key in json_dump:
json_value = json_dump.get(key)
logger.info(
f"[MacroParser] [{table_key}] Parsing JSON sidecar key",
key=key,
table_value=obj,
json_value=json_value,
)
content_value = json_value
if not json_value or (isinstance(json_value, str) and not json_value.strip()):
logger.warning(
f"[MacroParser] [{table_key}] Value for key was empty, skipping"
)
continue
elif key == TEMPLATE:
template: str = str(obj.get(TEMPLATE, ""))
logger.info(f"[MacroParser] [{table_key}] Filling template", template=template)
if not template:
logger.warning(f"[MacroParser] [{table_key}] Empty template, skipping")
continue
for k in json_dump:
template = fill_template(template, json_dump, k)
logger.info(f"[MacroParser] [{table_key}] Template filled!", template=template)
content_value = template
else:
continue
# TODO: Determine if the source_type is even really ever needed
# source_type: str = str(tab_value.get(SOURCE_TYPE, ""))
str_name: str = str(obj.get(NAME, FieldID.NOTES.name))
name: FieldID = FieldID.NOTES
field_id = str_name.upper().replace(" ", "_")
for fid in FieldID:
if field_id == fid.name:
name = fid
break
if ts_type == TAGS:
use_context: bool = bool(obj.get(USE_CONTEXT, False))
on_missing: str = str(obj.get(ON_MISSING, OnMissing.SKIP))
strict: bool = bool(obj.get(STRICT, False))
delimiter: str = ""
tag_strings: list[str] = []
# Tags are part of a single string
if isinstance(content_value, str):
delimiter = str(obj.get(DELIMITER, ""))
if delimiter:
# Split string based on given delimiter
tag_strings = content_value.split(delimiter)
else:
# If no delimiter is provided, assume the string is a single tag
tag_strings.append(content_value)
else:
tag_strings = content_value
# Remove the prefix (if given) from all tag strings
prefix = str(obj.get(PREFIX, ""))
if prefix:
tag_strings = [t.removeprefix(prefix) for t in tag_strings]
# Swap any mapped tags for their new tag values
tag_map: dict[str, str] = obj.get(MAP, {})
mapped: list[str] = []
if tag_map:
for map_key, map_value in tag_map.items():
if map_key in tag_strings:
logger.info("[MacroParser] Mapping tag", old=map_key, new=map_value)
if isinstance(map_value, list):
mapped.extend(map_value)
else:
mapped.append(map_value)
tag_strings.remove(map_key)
tag_strings.extend(mapped)
logger.info("[MacroParser] Found tags", tag_strings=tag_strings)
results.append(
TagResult(
tag_strings=tag_strings,
use_context=use_context,
strict=strict,
on_missing=on_missing,
prefix="",
)
)
elif ts_type in (TEXT_LINE, TEXT_BOX, DATETIME):
results.append(
FieldResult(content=content_value, name=name, field_type=ts_type)
)
else:
logger.error(f'[MacroParser] [{table_key}] Unknown "{TS_TYPE}"', type=ts_type)
return results
def add_data(table: dict[str, Any]) -> list[DataResult]:
"""Process an add_data instruction and return a list of DataResults.
Adding data refers to adding data defined inside a TagStudio macro, not from an external source.
"""
results: list[DataResult] = []
logger.info(table)
for table_value in table.values():
objects: list[dict[str, Any] | str] = []
if isinstance(table_value, list):
objects = table_value
else:
objects.append(table_value)
for obj in objects:
if not isinstance(obj, dict):
continue
ts_type = obj.get(TS_TYPE, "")
if ts_type == TAGS:
tag_strings: list[str] = obj.get(VALUE, [])
logger.info(tag_strings)
results.append(
TagResult(
tag_strings=tag_strings,
use_context=False,
)
)
elif ts_type in (TEXT_LINE, TEXT_BOX, DATETIME):
str_name: str = str(obj.get(NAME, FieldID.NOTES.name))
name: FieldID = FieldID.NOTES
field_id = str_name.upper().replace(" ", "_")
for fid in FieldID:
if field_id == fid.name:
name = fid
break
content_value: str = str(obj.get(VALUE, ""))
results.append(FieldResult(content=content_value, name=name, field_type=ts_type))
return results
def fill_template(
template: str, table: dict[str, Any], table_key: str, template_key: str = ""
) -> str:
"""Replaces placeholder keys in a string with the value from that table.
Args:
template (str): The string containing placeholder keys.
Key names should be surrounded in curly braces. (e.g. "{key}").
Nested keys should be accessed with square bracket syntax. (e.g. "{key[nested_key]}").
table (dict[str, Any]): The table to lookup values from.
table_key (str): The key to search for in the template and access the table with.
template_key (str): Similar to table_key, but is not used for accessing the table and
is instead used for representing the template key syntax for nested keys.
Used in recursive calls.
"""
key = template_key or table_key
value = table.get(table_key, "")
if isinstance(value, dict):
for v in value:
normalized_key: str = f"{key}[{str(v)}]"
template.replace(f"{{{normalized_key}}}", f"{{{str(v)}}}")
template = fill_template(template, value, str(v), normalized_key)
value = str(value)
return template.replace(f"{{{key}}}", f"{value}")


@@ -1,183 +0,0 @@
# Copyright (C) 2024 Travis Abendshien (CyanVoxel).
# Licensed under the GPL-3.0 License.
# Created for TagStudio: https://github.com/CyanVoxel/TagStudio
"""The core classes and methods of TagStudio."""
import json
from pathlib import Path
import structlog
from tagstudio.core.constants import TS_FOLDER_NAME
from tagstudio.core.library.alchemy.fields import FieldID
from tagstudio.core.library.alchemy.library import Library
from tagstudio.core.library.alchemy.models import Entry
logger = structlog.get_logger(__name__)
class TagStudioCore:
def __init__(self):
self.lib: Library = Library()
@classmethod
def get_gdl_sidecar(cls, filepath: Path, source: str = "") -> dict:
"""Attempt to open and dump a Gallery-DL Sidecar file for the filepath.
Return a formatted object with notable values or an empty object if none is found.
"""
info = {}
_filepath = filepath.parent / (filepath.name + ".json")
# NOTE: This fixes an unknown (recent?) bug in Gallery-DL where Instagram sidecar
# files may be downloaded with indices starting at 1 rather than 0, unlike the posts.
# This may only occur with sidecar files that are downloaded separate from posts.
if source == "instagram" and not _filepath.is_file():
newstem = _filepath.stem[:-16] + "1" + _filepath.stem[-15:]
_filepath = _filepath.parent / (newstem + ".json")
logger.info("get_gdl_sidecar", filepath=filepath, source=source, sidecar=_filepath)
try:
with open(_filepath, encoding="utf8") as f:
json_dump = json.load(f)
if not json_dump:
return {}
if source == "twitter":
info[FieldID.DESCRIPTION] = json_dump["content"].strip()
info[FieldID.DATE_PUBLISHED] = json_dump["date"]
elif source == "instagram":
info[FieldID.DESCRIPTION] = json_dump["description"].strip()
info[FieldID.DATE_PUBLISHED] = json_dump["date"]
elif source == "artstation":
info[FieldID.TITLE] = json_dump["title"].strip()
info[FieldID.ARTIST] = json_dump["user"]["full_name"].strip()
info[FieldID.DESCRIPTION] = json_dump["description"].strip()
info[FieldID.TAGS] = json_dump["tags"]
# info["tags"] = [x for x in json_dump["mediums"]["name"]]
info[FieldID.DATE_PUBLISHED] = json_dump["date"]
elif source == "newgrounds":
# info["title"] = json_dump["title"]
# info["artist"] = json_dump["artist"]
# info["description"] = json_dump["description"]
info[FieldID.TAGS] = json_dump["tags"]
info[FieldID.DATE_PUBLISHED] = json_dump["date"]
info[FieldID.ARTIST] = json_dump["user"].strip()
info[FieldID.DESCRIPTION] = json_dump["description"].strip()
info[FieldID.SOURCE] = json_dump["post_url"].strip()
except Exception:
logger.exception("Error handling sidecar file.", path=_filepath)
return info
# def scrape(self, entry_id):
# entry = self.lib.get_entry(entry_id)
# if entry.fields:
# urls: list[str] = []
# if self.lib.get_field_index_in_entry(entry, 21):
# urls.extend([self.lib.get_field_attr(entry.fields[x], 'content')
# for x in self.lib.get_field_index_in_entry(entry, 21)])
# if self.lib.get_field_index_in_entry(entry, 3):
# urls.extend([self.lib.get_field_attr(entry.fields[x], 'content')
# for x in self.lib.get_field_index_in_entry(entry, 3)])
# # try:
# if urls:
# for url in urls:
# url = "https://" + url if 'https://' not in url else url
# html_doc = requests.get(url).text
# soup = bs(html_doc, "html.parser")
# print(soup)
# input()
# # except:
# # # print("Could not resolve URL.")
# # pass
@classmethod
def match_conditions(cls, lib: Library, entry_id: int) -> bool:
"""Match defined conditions against a file to add Entry data."""
# TODO - what even is this file format?
# TODO: Make this stored somewhere better instead of temporarily in this JSON file.
cond_file = lib.library_dir / TS_FOLDER_NAME / "conditions.json"
if not cond_file.is_file():
return False
entry: Entry = lib.get_entry(entry_id)
try:
with open(cond_file, encoding="utf8") as f:
json_dump = json.load(f)
for c in json_dump["conditions"]:
match: bool = False
for path_c in c["path_conditions"]:
if Path(path_c).is_relative_to(entry.path):
match = True
break
if not match:
return False
if not c.get("fields"):
return False
fields = c["fields"]
entry_field_types = {field.type_key: field for field in entry.fields}
for field in fields:
is_new = field["id"] not in entry_field_types
field_key = field["id"]
if is_new:
lib.add_field_to_entry(entry.id, field_key, field["value"])
else:
lib.update_entry_field(entry.id, field_key, field["value"])
except Exception:
logger.exception("Error matching conditions.", entry=entry)
return False
@classmethod
def build_url(cls, entry: Entry, source: str):
"""Try to rebuild a source URL given a specific filename structure."""
source = source.lower().replace("-", " ").replace("_", " ")
if "twitter" in source:
return cls._build_twitter_url(entry)
elif "instagram" in source:
return cls._build_instagram_url(entry)
@classmethod
def _build_twitter_url(cls, entry: Entry):
"""Build a Twitter URL given a specific filename structure.
Method expects filename to be formatted as 'USERNAME_TWEET-ID_INDEX_YEAR-MM-DD'
"""
try:
stubs = str(entry.path.name).rsplit("_", 3)
url = f"www.twitter.com/{stubs[0]}/status/{stubs[-3]}/photo/{stubs[-2]}"
return url
except Exception:
logger.exception("Error building Twitter URL.", entry=entry)
return ""
@classmethod
def _build_instagram_url(cls, entry: Entry):
"""Build an Instagram URL given a specific filename structure.
Method expects filename to be formatted as 'USERNAME_POST-ID_INDEX_YEAR-MM-DD'
"""
try:
stubs = str(entry.path.name).rsplit("_", 2)
# stubs[0] = stubs[0].replace(f"{author}_", '', 1)
# print(stubs)
# NOTE: Both Instagram usernames AND their ID can have underscores in them,
# so unless you have the exact username (which can change) on hand to remove,
# your other best bet is to hope that the ID is only 11 characters long, which
# seems to more or less be the case... for now...
url = f"www.instagram.com/p/{stubs[-3][-11:]}"
return url
except Exception:
logger.exception("Error building Instagram URL.", entry=entry)
return ""


@@ -45,24 +45,34 @@ from PySide6.QtWidgets import (
QScrollArea,
)
import tagstudio.qt.resources_rc # noqa: F401
from tagstudio.core.constants import TAG_ARCHIVED, TAG_FAVORITE, VERSION, VERSION_BRANCH
# This import has the side effect of importing PySide resources
import tagstudio.qt.resources_rc # noqa: F401 # pyright: ignore [reportUnusedImport]
from tagstudio.core.constants import (
MACROS_FOLDER_NAME,
TAG_ARCHIVED,
TAG_FAVORITE,
TS_FOLDER_NAME,
VERSION,
VERSION_BRANCH,
)
from tagstudio.core.driver import DriverMixin
from tagstudio.core.enums import MacroID, SettingItems, ShowFilepathOption
from tagstudio.core.enums import SettingItems, ShowFilepathOption
from tagstudio.core.library.alchemy.enums import (
BrowsingState,
FieldTypeEnum,
SortingModeEnum,
)
from tagstudio.core.library.alchemy.fields import FieldID
from tagstudio.core.library.alchemy.library import Library, LibraryStatus
from tagstudio.core.library.alchemy.models import Entry
from tagstudio.core.library.alchemy.models import Entry, Tag
from tagstudio.core.library.ignore import Ignore
from tagstudio.core.library.refresh import RefreshTracker
from tagstudio.core.macro_parser import (
DataResult,
FieldResult,
TagResult,
parse_macro_file,
)
from tagstudio.core.media_types import MediaCategories
from tagstudio.core.query_lang.util import ParsingError
from tagstudio.core.ts_core import TagStudioCore
from tagstudio.core.utils.str_formatting import strip_web_protocol
from tagstudio.core.utils.types import unwrap
from tagstudio.qt.cache_manager import CacheManager
from tagstudio.qt.controllers.ffmpeg_missing_message_box import FfmpegMissingMessageBox
@@ -308,10 +318,20 @@ class QtDriver(DriverMixin, QObject):
pal: QPalette = self.app.palette()
pal.setColor(QPalette.ColorGroup.Normal, QPalette.ColorRole.Window, QColor("#1e1e1e"))
pal.setColor(QPalette.ColorGroup.Normal, QPalette.ColorRole.Button, QColor("#1e1e1e"))
pal.setColor(QPalette.ColorGroup.Inactive, QPalette.ColorRole.Window, QColor("#232323"))
pal.setColor(QPalette.ColorGroup.Inactive, QPalette.ColorRole.Button, QColor("#232323"))
pal.setColor(
QPalette.ColorGroup.Inactive, QPalette.ColorRole.ButtonText, QColor("#666666")
QPalette.ColorGroup.Inactive,
QPalette.ColorRole.Window,
QColor("#232323"),
)
pal.setColor(
QPalette.ColorGroup.Inactive,
QPalette.ColorRole.Button,
QColor("#232323"),
)
pal.setColor(
QPalette.ColorGroup.Inactive,
QPalette.ColorRole.ButtonText,
QColor("#666666"),
)
self.app.setPalette(pal)
@@ -534,6 +554,21 @@ class QtDriver(DriverMixin, QObject):
# endregion
# region Macros Menu ==========================================================
self.main_window.menu_bar.test_macro_1_action.triggered.connect(
lambda: (
self.run_macros(self.main_window.menu_bar.test_macro_1, self.selected),
# self.main_window.preview_panel.update_widgets(update_preview=False),
)
)
self.main_window.menu_bar.test_macro_2_action.triggered.connect(
lambda: (
self.run_macros(self.main_window.menu_bar.test_macro_2, self.selected),
# self.preview_panel.update_widgets(update_preview=False),
)
)
def create_folders_tags_modal():
if not hasattr(self, "folders_modal"):
self.folders_modal = FoldersToTagsModal(self.lib, self)
@@ -782,7 +817,8 @@ class QtDriver(DriverMixin, QObject):
end_time = time.time()
self.main_window.status_bar.showMessage(
Translations.format(
"status.library_closed", time_span=format_timespan(end_time - start_time)
"status.library_closed",
time_span=format_timespan(end_time - start_time),
)
)
@@ -1075,56 +1111,85 @@ class QtDriver(DriverMixin, QObject):
# # self.run_macro('autofill', id)
yield 0
def run_macros(self, name: MacroID, entry_ids: list[int]):
def run_macros(self, macro_name: str, entry_ids: list[int]):
"""Run a specific Macro on a group of given entry_ids."""
for entry_id in entry_ids:
self.run_macro(name, entry_id)
self.run_macro(macro_name, entry_id)
def run_macro(self, name: MacroID, entry_id: int):
def run_macro(self, macro_name: str, entry_id: int):
"""Run a specific Macro on an Entry given a Macro name."""
entry: Entry = self.lib.get_entry(entry_id)
if not self.lib.library_dir:
logger.error("[QtDriver] Can't run macro when no library is open!")
return
entry: Entry | None = self.lib.get_entry(entry_id)
if not entry:
logger.error(f"[QtDriver] No Entry given ID {entry_id}!")
return
full_path = self.lib.library_dir / entry.path
source = "" if entry.path.parent == Path(".") else entry.path.parts[0].lower()
# macro_path = Path(
# self.lib.library_dir / TS_FOLDER_NAME / MACROS_FOLDER_NAME / f"{macro_name}.toml"
# )
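# NOTE: macro_name includes its file extension (e.g. "import_metadata.toml" from the Macros menu)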
macro_path = Path(self.lib.library_dir / TS_FOLDER_NAME / MACROS_FOLDER_NAME / macro_name)
logger.info(
"running macro",
source=source,
macro=name,
"[QtDriver] Running Macro",
macro_path=macro_path,
entry_id=entry.id,
)
if name == MacroID.AUTOFILL:
for macro_id in MacroID:
if macro_id == MacroID.AUTOFILL:
continue
self.run_macro(macro_id, entry_id)
results: list[DataResult] = parse_macro_file(macro_path, full_path)
for result in results:
if isinstance(result, TagResult):
tag_ids: set[int] = set()
for string in result.tag_strings:
if not string.strip():
continue
# NOTE: The following code overlaps with update_tags() in tag_search.py
# Sort and prioritize the results
tag_results: list[set[Tag]] = self.lib.search_tags(name=string, limit=-1)
results_0 = list(tag_results[0])
results_0.sort(key=lambda tag: tag.name.lower())
results_1 = list(tag_results[1])
results_1.sort(key=lambda tag: tag.name.lower())
raw_results = list(results_0 + results_1)
priority_results: set[Tag] = set()
elif name == MacroID.SIDECAR:
parsed_items = TagStudioCore.get_gdl_sidecar(full_path, source)
for field_id, value in parsed_items.items():
if isinstance(value, list) and len(value) > 0 and isinstance(value[0], str):
value = self.lib.tag_from_strings(value)
for tag in raw_results:
if (
tag.name.lower().startswith(string.strip().lower())
and tag not in priority_results
):
priority_results.add(tag)
all_results = sorted(list(priority_results), key=lambda tag: len(tag.name)) + [
r for r in raw_results if r not in priority_results
]
final_tag: Tag | None = None
if len(all_results) > 0:
final_tag = all_results[0]
# tag = self.lib.get_tag_by_name(string)
if final_tag:
tag_ids.add(final_tag.id)
if not tag_ids:
continue
self.lib.add_tags_to_entries(entry_id, tag_ids)
elif isinstance(result, FieldResult):
self.lib.add_field_to_entry(
entry.id,
field_id=field_id,
value=value,
entry_id,
field_id=result.name,
value=result.content,
skip_on_exists=True,
)
elif name == MacroID.BUILD_URL:
url = TagStudioCore.build_url(entry, source)
if url is not None:
self.lib.add_field_to_entry(entry.id, field_id=FieldID.SOURCE, value=url)
elif name == MacroID.MATCH:
TagStudioCore.match_conditions(self.lib, entry.id)
elif name == MacroID.CLEAN_URL:
for field in entry.text_fields:
if field.type.type == FieldTypeEnum.TEXT_LINE and field.value:
self.lib.update_entry_field(
entry_ids=entry.id,
field=field,
content=strip_web_protocol(field.value),
)
@property
def sorting_direction(self) -> bool:
"""Whether to Sort the results in ascending order."""
return self.main_window.sorting_direction_combobox.currentData()
def sorting_direction_callback(self):
logger.info("Sorting Direction Changed", ascending=self.main_window.sorting_direction)
@@ -1248,6 +1313,9 @@ class QtDriver(DriverMixin, QObject):
self.main_window.preview_panel.set_selection(self.selected)
def set_macro_menu_viability(self):
self.main_window.menu_bar.test_macro_1_action.setDisabled(not self.selected)
def set_clipboard_menu_viability(self):
if len(self.selected) == 1:
self.main_window.menu_bar.copy_fields_action.setEnabled(True)
@@ -1280,7 +1348,8 @@ class QtDriver(DriverMixin, QObject):
def update_completions_list(self, text: str) -> None:
matches = re.search(
r"((?:.* )?)(mediatype|filetype|path|tag|tag_id):(\"?[A-Za-z0-9\ \t]+\"?)?", text
r"((?:.* )?)(mediatype|filetype|path|tag|tag_id):(\"?[A-Za-z0-9\ \t]+\"?)?",
text,
)
completion_list: list[str] = []
@@ -1551,7 +1620,10 @@ class QtDriver(DriverMixin, QObject):
except Exception as e:
logger.error(e)
open_status = LibraryStatus(
success=False, library_path=path, message=type(e).__name__, msg_description=str(e)
success=False,
library_path=path,
message=type(e).__name__,
msg_description=str(e),
)
self.cache_manager = CacheManager(
path,


@@ -381,6 +381,14 @@ class MainMenuBar(QMenuBar):
def setup_macros_menu(self):
self.macros_menu = QMenu(Translations["menu.macros"], self)
self.test_macro_1 = "import_metadata.toml"
self.test_macro_1_action = QAction(self.test_macro_1, self)
self.macros_menu.addAction(self.test_macro_1_action)
self.test_macro_2 = "conditionals.toml"
self.test_macro_2_action = QAction(self.test_macro_2, self)
self.macros_menu.addAction(self.test_macro_2_action)
self.folders_to_tags_action = QAction(Translations["menu.macros.folders_to_tags"], self)
self.folders_to_tags_action.setEnabled(False)
self.macros_menu.addAction(self.folders_to_tags_action)


@@ -64,6 +64,7 @@ class PreviewPanelView(QWidget):
super().__init__()
self.lib = library
self._selected = []
self.__thumb = PreviewThumb(self.lib, driver)
self.__file_attrs = FileAttributes(self.lib, driver)
self._fields = FieldContainers(
@@ -132,7 +133,7 @@ class PreviewPanelView(QWidget):
def _set_selection_callback(self):
raise NotImplementedError()
def set_selection(self, selected: list[int], update_preview: bool = True):
def set_selection(self, selected: list[int] | None = None, update_preview: bool = True):
"""Render the panel widgets with the newest data from the Library.
Args:
@@ -140,10 +141,10 @@ class PreviewPanelView(QWidget):
update_preview (bool): Should the file preview be updated?
(Only works with one or more items selected)
"""
self._selected = selected
self._selected = selected or []
try:
# No Items Selected
if len(selected) == 0:
if len(self._selected) == 0:
self.__thumb.hide_preview()
self.__file_attrs.update_stats()
self.__file_attrs.update_date_label()
@@ -152,8 +153,8 @@ class PreviewPanelView(QWidget):
self.add_buttons_enabled = False
# One Item Selected
elif len(selected) == 1:
entry_id = selected[0]
elif len(self._selected) == 1:
entry_id = self._selected[0]
entry: Entry = unwrap(self.lib.get_entry(entry_id))
filepath: Path = unwrap(self.lib.library_dir) / entry.path
@@ -169,10 +170,10 @@ class PreviewPanelView(QWidget):
self.add_buttons_enabled = True
# Multiple Selected Items
elif len(selected) > 1:
elif len(self._selected) > 1:
# items: list[Entry] = [self.lib.get_entry_full(x) for x in self.driver.selected]
self.__thumb.hide_preview() # TODO: Render mixed selection
self.__file_attrs.update_multi_selection(len(selected))
self.__file_attrs.update_multi_selection(len(self._selected))
self.__file_attrs.update_date_label()
self._fields.hide_containers() # TODO: Allow for mixed editing