Mirror of https://github.com/TagStudioDev/TagStudio.git (synced 2026-01-29 06:10:51 +00:00)

Compare commits: `57849bf4d5...macros` (16 commits)

- b141736213
- d34361be46
- 7cf769c5ed
- 1110f64ff5
- 119b964b16
- ff6d13ca30
- 164c58d1c9
- 4a60637202
- 4675bed373
- 97136ee442
- 25f421bca4
- 3221aafdfc
- 5384f308ac
- 9b625b07a3
- 4de7893c19
- 20d641d6f3
docs/macros.md (523 lines changed)

@@ -2,49 +2,520 @@
icon: material/script-text
---

# :material-script-text: Tools & Macros
# :material-script-text: Macros

Tools and macros are features that serve to create a more fluid [library](libraries.md)-managing process, or provide some extra functionality. Please note that some are still in active development and will be fleshed out in future updates.

TagStudio features a configurable macro system which allows you to set up automatic or manually triggered actions to perform a wide array of operations on your [library](libraries.md). Each macro is stored in an individual script file and is created using [TOML](https://toml.io/en/) with a predefined schema described below. Macro files are stored in your library's "`.TagStudio/macros`" folder.
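Before diving into the individual keys, here is a sketch of what one complete macro file can look like. Every key and value below mirrors an example documented elsewhere on this page; the "`animated`" action name is arbitrary, and the filename of the macro file itself is assumed to be arbitrary as well:

```toml
# Hypothetical complete macro file, e.g. .TagStudio/macros/animated.toml
schema_version = 1
triggers = ["on_new_entry"]

# An action table with a user-chosen name
[animated]
action = "add_data"
source_filters = ["**/*.gif", "**/*.apng"]

# A task table nested under the action
[animated.tags]
ts_type = "tags"
value = ["Animated"]
```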

## Tools
## Schema Version

### Fix Unlinked Entries

The `schema_version` key declares which version of the macro schema is currently being used. Current schema version: 1.

This tool displays the number of unlinked [entries](entries.md), and some options for their resolution.

```toml
schema_version = 1
```

Refresh
: Scans through the library and updates the unlinked entry count.

## Triggers

Search & Relink
: Attempts to automatically find and reassign missing files.

The `triggers` key declares when a macro may be automatically run. Macros can still be manually triggered even if they have automatic triggers defined.

Delete Unlinked Entries
: Displays a confirmation prompt containing the list of all missing files to be deleted before committing to or cancelling the operation.

- `on_open`: Run when the TagStudio library is opened.
- `on_refresh`: Run when the TagStudio library's directories have been refreshed.
- `on_new_entry`: Run when a new [file entry](entries.md) has been created.

### Fix Duplicate Files

```toml
triggers = ["on_new_entry"]
```

This tool allows for management of duplicate files in the library using a [DupeGuru](https://dupeguru.voltaicideas.net/) file.

## Actions

Load DupeGuru File
: Load the "results" file created from a DupeGuru scan.

Actions are broad categories of operations that your macro will perform. They are represented by TOML tables and must have a unique name in your macro file, but the name itself has no effect on the macro's behavior. A single macro file can contain multiple actions, and each action can contain multiple tasks.

Mirror Entries
: Duplicate entries will have their contents mirrored across all instances. This allows for duplicate files to then be deleted with DupeGuru as desired, without losing the [field](fields.md) data that has been assigned to either. (Once deleted, the "Fix Unlinked Entries" tool can be used to clean up the duplicates.)

An action table with a name of your choosing (e.g. `[action]`) will contain the general configuration for your action, and nested task tables (e.g. `[action.task]`) will define the specifics of your action's tasks.

### Create Collage

Action tables must have an `action` key with one of the following valid action values:

This tool is a preview of an upcoming feature. When selected, TagStudio will generate a collage of all the contents in a Library, which can be found in the Library folder ("/your-folder/.TagStudio/collages/"). Note that this feature is still in early development and doesn't yet offer any customization options.

- [`import_data`](#import-data): Import data from a supported external source.
- [`add_data`](#add-data): Add data declared inside the macro file.

## Macros

```toml
[newgrounds]
action = "import_data"
```

### Auto-fill [WIP]

Most of the configuration of actions comes at the [task configuration](#task-configuration) level. This is where you will build out exactly how your action will translate data and instructions into results for your TagStudio library.

This tool is in development and will be documented in a future update.

---

### Sort fields

### Add Data

This tool is in development. It will allow for user-defined sorting of [fields](fields.md).

The `add_data` action lets you add data to a [file entry](entries.md) given one or more conditional statements. Unlike the [`import_data`](#import-data) action, the `add_data` action adds data declared in the macro itself rather than importing it from a source external to the macro.

### Folders to Tags

Compatible Keys:

Creates tags from the existing folder structure in the library, which are previewed in a hierarchy view for the user to confirm. A tag will be created for each folder and applied to all entries, with each subfolder being linked to the parent folder as a [parent tag](tags.md#parent-tags). Tags will initially be named after the folders, but can be fully edited and customized afterwards.

- [`source_filters`](#source_filters)
- [`value`](#value)
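A minimal `add_data` action using both compatible keys might look like the following sketch (the "`screenshot`" action name, filter pattern, and tag value are hypothetical, but each key is documented on this page):

```toml
[screenshot]
action = "add_data"
source_filters = ["**/Screenshots/**"] # Condition: only files under a Screenshots folder

[screenshot.tags]
ts_type = "tags"
value = ["Screenshot"] # Data declared in the macro itself
```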

---

### Import Data

The `import_data` action allows you to import external data into your TagStudio library in the form of [tags](tags.md) and [fields](fields.md). While some sources need explicit support (e.g. ID3, EXIF), generic sources such as JSON sidecar files can leverage a wide array of data shaping options that allow the underlying data structure to be abstracted from TagStudio's internal data structures. This macro pairs very well with tools that download sidecar files for data, such as [gallery-dl](https://github.com/mikf/gallery-dl).

Compatible Keys:

- [`key`](#key)
- [`source_location`](#source_location)
- [`source_format`](#source_format)
- [`is_embedded`](#is_embedded)

If you're importing from an object-like source (e.g. JSON), you'll need to create a nested task table with the format `[action.task]` and provide a [`key`](#key) field filled with the name of the targeted source key. In this case the task name does not matter as long as it doesn't conflict with one of the built-in task names (i.e. "`map`", "`inverse_map`", "`template`").

<!-- prettier-ignore -->
=== "Importable JSON Data"

    ```json
    {
        "newgrounds": {
            "tags": ["tag1", "tag2"]
        }
    }
    ```

=== "TOML Macro"

    ```toml
    [newgrounds]
    action = "import_data"
    [newgrounds.tags]
    key="tags"
    ```

Inside the new table we can now declare additional information about the native data formats and how they should be imported into TagStudio.

---

### Action Configuration

#### `source_format`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`import_data`](#import-data)

The `source_format` key is used to declare what type of source data will be imported from.

```toml
[newgrounds]
action = "import_data"
source_format = "json"
```

- `exif`: Embedded EXIF metadata
- `id3`: Embedded ID3 metadata
- `json`: A JSON formatted file
- `text`: A plaintext file
- `xml`: An XML formatted file
- `xmp`: Embedded XMP metadata or an XMP sidecar file

---

#### `source_location`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`import_data`](#import-data)

The `source_location` key is used to declare where the metadata should be imported from. This can be a relative or absolute path, and can reference the targeted filename with the `(unknown)` placeholder.

```toml
[newgrounds]
action = "import_data"
source_format = "json"
source_location = "(unknown).json" # Relative sidecar file
```

<!-- - `absolute`: An absolute file location
- `embedded`: Data that's embedded within the targeted file
- `sidecar`: A sidecar file with a relative file location -->

---

#### `is_embedded`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`import_data`](#import-data)

If targeting embedded data, add the `is_embedded` key and set it to `true`. If no `source_location` is used, then the file this macro is targeting will be used as the source.

```toml
[newgrounds]
action = "import_data"
source_format = "id3"
is_embedded = true
```

---

#### `source_filters`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`add_data`](#add-data), [`import_data`](#import-data)

`source_filters` is used to declare a glob list of files that can be targeted by this action. An entry filepath only needs to fall under one of the given source filters in order for the macro to continue. If it doesn't, the macro will be skipped for this file entry.

<!-- prettier-ignore -->
=== "import_data"

    ```toml
    [newgrounds]
    action = "import_data"
    source_format = "json"
    source_location = "(unknown).json"
    source_filters = ["**/Newgrounds/**"]
    ```

=== "add_data"

    ```toml
    [animated]
    action = "add_data"
    source_filters = ["**/*.gif", "**/*.apng"]
    ```
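The "falls under at least one filter" rule can be sketched in Python. This is an illustration of OR-style glob matching, not TagStudio's actual implementation; the standard-library `fnmatch` is used here as a stand-in (its `*` also matches across path separators, which is close enough to demonstrate the semantics — the repository itself imports the `wcmatch` glob library):

```python
from fnmatch import fnmatch


def matches_any(path: str, source_filters: list[str]) -> bool:
    # An entry passes if its path falls under at least one filter pattern.
    return any(fnmatch(path, pattern) for pattern in source_filters)


print(matches_any("art/Newgrounds/piece.png", ["**/Newgrounds/**"]))  # True
print(matches_any("photos/cat.gif", ["**/*.gif", "**/*.apng"]))       # True
print(matches_any("photos/cat.jpg", ["**/*.gif", "**/*.apng"]))       # False
```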

<!-- ### Source Types

The `source_type` key allows for the explicit declaration of the type and/or format of the source data. When this key is omitted, TagStudio will default to the data type that makes the most sense for the destination [TagStudio type](#tagstudio-types).

- `string`: A character string (text)
- `integer`: An integer
- `float`: A floating point number
- `url`: A string with a special URL formatting pass
- [`ISO8601`](https://en.wikipedia.org/wiki/ISO_8601): A standard datetime format
- `list:string`: List of strings (text)
- `list:integer`: List of integers
- `list:float`: List of floating point numbers -->

---

## Task Configuration

An [action's](#actions) tasks need to be configured using the built-in keys available to each action. These keys may be specific to certain actions, required or optional, or expect other specific formatting. The actions section lists each action's available keys, and the following list of keys likewise lists which actions they are compatible with, along with any other rules.

Along with generally defining your own custom tasks, there are a few built-in tasks that have reserved names and offer extra functionality on top of your own tasks. These currently include:

- [`.inverse_map`](#many-to-1-inverse-map) (Inverse Tag Maps)
- [`.map`](#manual-tag-mapping) (Tag Maps)
- [`.template`](#templates) (Templates)

---

### `key`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`import_data`](#import-data)

The `key` key is used to specify the object key to target in your data source. If you're targeting a nested object, separate the names of the keys with a dot.

```toml
[artstation]
action = "import_data"
source_format = "json"
[artstation.tags]
key="tags"
ts_type = "tags"
[artstation.mediums]
key="mediums.name" # Nested key
ts_type = "tags"
```

When importing from the same key multiple times, you have the option to either choose different names for your task tables or use the same name with these tables wrapped in an extra pair of brackets.

<!-- prettier-ignore -->
=== "Single Import"

    ```toml
    [newgrounds]
    # Newgrounds table info here
    [newgrounds.artist]
    key="artist"
    ts_type = "tags"
    use_context = false
    on_missing = "create"
    ```

=== "Multiple Imports"

    ```toml
    [newgrounds]
    # Newgrounds table info here
    [newgrounds.artist_tag]
    key="artist"
    ts_type = "tags"
    use_context = false
    on_missing = "skip"
    [newgrounds.artist_text]
    key="artist"
    ts_type = "text_line"
    name = "Artist"
    ```

=== "Multiple Imports (Wrapped)"

    ```toml
    [newgrounds]
    # Newgrounds table info here
    [[newgrounds.artist]]
    key="artist"
    ts_type = "tags"
    use_context = false
    on_missing = "skip"
    [[newgrounds.artist]]
    key="artist"
    ts_type = "text_line"
    name = "Artist"
    ```

---

### `ts_type`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`add_data`](#add-data), [`import_data`](#import-data)

The required `ts_type` key defines the destination data format inside TagStudio itself. This can be [tags](tags.md) or any [field](fields.md) type.

- [`tags`](tags.md)
- [`text_line`](fields.md#text-line)
- [`text_box`](fields.md#text-box)
- [`datetime`](fields.md#datetime)

<!-- prettier-ignore -->
=== "Title Field"

    ```toml
    [newgrounds]
    # newgrounds table info here
    [newgrounds.title]
    ts_type = "text_line"
    name = "Title"
    ```

=== "Tags"

    ```toml
    [newgrounds]
    # newgrounds table info here
    [newgrounds.tags]
    ts_type = "tags"
    ```

#### Field Specific Keys

`name`: The name of the field to import into.

<!-- prettier-ignore -->
=== "text_line"

    ```toml
    [newgrounds.user]
    key="user"
    ts_type = "text_line"
    name = "Author"
    ```

=== "text_box"

    ```toml
    [newgrounds.content]
    key="content"
    ts_type = "text_box"
    name = "Description"
    ```

<!-- prettier-ignore -->
!!! note
    As of writing (v9.5.3) TagStudio fields still do not allow for custom names. The macro system is designed to be forward-thinking with this feature in mind, however only existing TagStudio field names are currently considered valid. Any invalid field names will default to the "Notes" field.
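Of the `ts_type` values listed earlier, only `datetime` has no example in this section. A sketch following the same pattern as the field examples above (the "`date`" source key and the "Date Published" field name are hypothetical and assume a datetime-compatible source value):

```toml
[newgrounds.date]
key="date"
ts_type = "datetime"
name = "Date Published"
```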

#### Tag Specific Keys

Since TagStudio tags are more complex than other traditional tag formats, there are several options for fine-tuning how tags should be imported.

`delimiter`: The delimiter between string tags to use.

<!-- prettier-ignore -->
=== "Comma + Space Separation"

    ```toml
    [newgrounds.tags]
    ts_type = "tags"
    delimiter = ", "
    ```

=== "Newline Separation"

    ```toml
    [newgrounds.tags]
    ts_type = "tags"
    delimiter = "\n"
    ```

`on_missing`: Determines how to handle source tags with no match in the library.

- `"prompt"`: Ask the user if they wish to create, skip, or manually choose an existing tag.
- `"create"`: Automatically create a new TagStudio tag based on the source tag.
- `"skip"` (Default): Ignore the unmatched tags.

```toml
[newgrounds.tags]
ts_type = "tags"
strict = false
use_context = true
on_missing = "create"
```

`prefix`: An optional prefix to remove.

<!-- prettier-ignore -->
!!! example
    Given a list of tags such as `["#tag1", "#tag2", "#tag3"]`, you may wish to remove the "`#`" prefix.

    ```toml
    [instagram.tags]
    ts_type = "tags"
    prefix = "#"
    ```

`strict`: A flag that determines which [names](tags.md#naming-tags) of the TagStudio tags should be used to compare against the source data when matching.

- `true`: Only match against the TagStudio tag [name](tags.md#name) field.
- `false` (Default): Match against any TagStudio tag name field, including [shorthands](tags.md#shorthand), [aliases](tags.md#aliases), and the [disambiguation name](tags.md#disambiguation).

`use_context`: A flag that determines if TagStudio should use context clues from other source tags to provide more accurate tag matches.

- `true` (Default): Use context clue matching (slower, less ambiguous).
- `false`: Ignore surrounding source tags (faster, more ambiguous).

---

### `value`

<!-- prettier-ignore -->
!!! note ""
    Compatible Actions: [`add_data`](#add-data)

The `value` key is used specifically with the [`add_data`](#add-data) action to define what value should be added to the file entry.

<!-- prettier-ignore -->
=== "Title Field"

    ```toml
    [animated]
    action = "add_data"
    source_filters = ["**/*.gif", "**/*.apng"]
    [animated.title]
    ts_type = "text_line"
    name = "Title"
    value = "Animated Image"
    ```

=== "Tags"

    ```toml
    [animated]
    action = "add_data"
    source_filters = ["**/*.gif", "**/*.apng"]
    [animated.tags]
    ts_type = "tags"
    value = ["Animated"]
    ```

---

### Manual Tag Mapping

If the automatic tag matching system isn't enough to import tags the way you'd like, you can manually specify mappings between source and destination tags. Tables with the `.map` or `.inverse_map` task suffixes will be used to map tags in the nearest scope.

<!-- prettier-ignore -->
=== "Global Scope"

    ```toml
    # Applies to all actions in the macro file
    [map]
    ```

=== "Action Scope"

    ```toml
    # Applies to all tasks in the "newgrounds" action
    [newgrounds.map]
    ```

=== "Key Scope"

    ```toml
    # Only applies to the "ratings" task inside the "newgrounds" action
    [newgrounds.ratings.map]
    ```

- `map`: Used for "[1 to 0](#1-to-0-ignore-matches)", "[1 to 1](#1-to-1)", and "[1 to many](#1-to-many)" mappings
- `inverse_map`: Used for "[many to 1](#many-to-1-inverse-map)" mappings

---

#### 1 to 0 (Ignore Matches)

By mapping the key of the source tag name to an empty string, you can ignore that tag when matching with your own tags. This is useful if you're importing from a source that uses tags you don't wish to use or create inside your own libraries.

```toml
[newgrounds.tags.map]
# Source Tag Name = Nothing, Ignore Matches
favorite = ""
```

---

#### 1 to 1

By mapping the key or quoted string of a source tag to one of your TagStudio tags, you can directly specify a destination tag while bypassing the matching algorithm.

<!-- prettier-ignore -->
!!! tip
    Consider using tag [aliases](tags.md#aliases) instead of 1 to 1 mapping. This mapping technique is useful if you want to map a specific source tag to a destination tag that you otherwise don't consider to be an alternate name for the destination tag.

```toml
[newgrounds.tags.map]
# Source Tag Name = TagStudio Tag Name
colored_pencil = "Drawing"
"Colored Pencil" = "Drawing"
```

---

#### 1 to Many

By mapping the key or quoted string of a source tag to a **list of your TagStudio tags**, you can cause one source tag to import as more than one of your TagStudio tags.

```toml
[newgrounds.tags.map]
# Source Tag Name = List of TagStudio Tag Names
drawing = ["Drawing (2D)", "Image (Meta Tags)"]
video = ["Animation (2D)", "Animated (Meta Tags)"]
```

---

#### Many to 1 (Inverse Map)

By mapping the key or quoted string of one of your TagStudio tags to a **list of source tags**, you can declare a combination of required source tags that result in a wholly new matched TagStudio tag. This is useful if you use a single tag in your TagStudio library that is represented by multiple separate tags from your source.

```toml
[newgrounds.tags.inverse_map]
# TagStudio Tag Name = List of Source Tag Names
"Animation (2D)" = ["drawing", "video"]
"Animation (3D)" = ["3D", "video"]
```

---

### Templates

Templates are part of the `import_data` action and allow you to take data from one or more keys of a source and combine them into a single value. Template sub-action tables must begin with the action name and end with `.template` (e.g. `[action.template]`). Source object keys can be embedded in a string value if surrounded by curly braces (`{}`). Nested keys are accessed by separating the keys with a dot (e.g. `{key.nested_key}`).

<!-- prettier-ignore-start -->
=== "Composite Template"

    ```toml
    [bluesky.template]
    template = "https://www.bsky.app/profile/{author.handle}/post/{post_id}"
    ts_type = "text_line"
    name = "Source"
    ```

=== "Multiple Templates per Action"

    ```toml
    [[artstation.template]]
    template = "Original Tags: {tags}"
    ts_type = "text_box"
    name = "Notes"

    [[artstation.template]]
    template = "Original Mediums: {mediums}"
    ts_type = "text_box"
    name = "Notes"
    ```
<!-- prettier-ignore-end -->
docs/relinking.md (new file, 34 lines)

@@ -0,0 +1,34 @@

---
title: Entry Relinking
icon: material/link-variant
---

# :material-link-variant: Entry Relinking

### Fix Unlinked Entries

This tool displays the number of unlinked [entries](entries.md), and some options for their resolution.

Refresh

- Scans through the library and updates the unlinked entry count.

Search & Relink

- Attempts to automatically find and reassign missing files.

Delete Unlinked Entries

- Displays a confirmation prompt containing the list of all missing files to be deleted before committing to or cancelling the operation.

### Fix Duplicate Files

This tool allows for management of duplicate files in the library using a [DupeGuru](https://dupeguru.voltaicideas.net/) file.

Load DupeGuru File

- Load the "results" file created from a DupeGuru scan.

Mirror Entries

- Duplicate entries will have their contents mirrored across all instances. This allows for duplicate files to then be deleted with DupeGuru as desired, without losing the [field](fields.md) data that has been assigned to either. (Once deleted, the "Fix Unlinked Entries" tool can be used to clean up the duplicates.)

@@ -243,18 +243,27 @@ Discrete library objects representing [attributes](<https://en.wikipedia.org/wik

- [ ] OCR Search :material-chevron-up:{ .priority-low title="Low Priority" }
- [ ] Fuzzy Search :material-chevron-up:{ .priority-low title="Low Priority" }

### :material-file-cog: [Macros](macros.md)
### :material-script-text: [Macros](macros.md)

- [ ] Standard, Human Readable Format (TOML) :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.x]**
- [ ] Versioning System :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.x]**
- [ ] Triggers **[v9.5.x]**
- [x] Standard, Human Readable Format (TOML) :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.5]**
- [x] Versioning System :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.5]**
- [ ] Triggers **[v9.5.5]**
    - [ ] On File Added :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] On Library Refresh :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] [...]
- [ ] Actions **[v9.5.x]**
    - [ ] Add Tag(s) :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] Add Field(s) :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] Set Field Content :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Import from JSON file :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] Import from plaintext file :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] Import from XML file :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Create templated fields from other table keys :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Remove tag prefixes from import sources :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Specify tag delimiters from import sources :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Add data (tags + fields) configured in macro :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Glob filter for entry file :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Map source tags to TagStudio tags :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] [...]

### :material-table-arrow-right: Sharable Data

@@ -274,9 +283,9 @@ Packs are intended as an easy way to import and export specific data between lib

- [ ] UUIDs + Namespaces :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Standard, Human Readable Format (TOML) :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Versioning System :material-chevron-double-up:{ .priority-med title="Medium Priority" }
- [ ] Macro Sharing :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.x]**
    - [ ] Importable :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [ ] Exportable :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [x] Macro Sharing :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v9.5.x]**
    - [x] Importable :material-chevron-triple-up:{ .priority-high title="High Priority" }
    - [x] Exportable :material-chevron-triple-up:{ .priority-high title="High Priority" }
- [ ] Sharable Entry Data :material-chevron-double-up:{ .priority-med title="Medium Priority" } **[v9.9.x]**
    - _Specifics of this are yet to be determined_
- [ ] Export Library to Human Readable Format :material-chevron-triple-up:{ .priority-high title="High Priority" } **[v10.0.0]**
@@ -43,8 +43,10 @@ nav:

- entries.md
- preview-support.md
- search.md
- ignore.md
- macros.md
- Management:
    - relinking.md
    - ignore.md
- Fields:
    - fields.md
- Tags:
@@ -10,6 +10,7 @@ TS_FOLDER_NAME: str = ".TagStudio"

BACKUP_FOLDER_NAME: str = "backups"
COLLAGE_FOLDER_NAME: str = "collages"
IGNORE_NAME: str = ".ts_ignore"
MACROS_FOLDER_NAME: str = "macros"
THUMB_CACHE_NAME: str = "thumbs"

FONT_SAMPLE_TEXT: str = (
@@ -51,14 +51,6 @@ class OpenStatus(enum.IntEnum):

    CORRUPTED = 2


class MacroID(enum.Enum):
    AUTOFILL = "autofill"
    SIDECAR = "sidecar"
    BUILD_URL = "build_url"
    MATCH = "match"
    CLEAN_URL = "clean_url"


class DefaultEnum(enum.Enum):
    """Allow saving multiple identical values in property called .default."""
@@ -1062,19 +1062,21 @@ class Library:

                selectinload(Tag.parent_tags),
                selectinload(Tag.aliases),
            )

            if limit > 0:
                query = query.limit(limit)

            if name:
                query = query.where(
                    or_(
                        Tag.name.icontains(name),
                        Tag.shorthand.icontains(name),
                        TagAlias.name.icontains(name),
                        Tag.name.istartswith(name),
                        Tag.shorthand.istartswith(name),
                        TagAlias.name.istartswith(name),
                    )
                )

            direct_tags = set(session.scalars(query))

            ancestor_tag_ids: list[Tag] = []
            for tag in direct_tags:
                ancestor_tag_ids.extend(

@@ -1092,14 +1094,6 @@

                {at for at in ancestor_tags if at not in direct_tags},
            ]

            logger.info(
                "searching tags",
                search=name,
                limit=limit,
                statement=str(query),
                results=len(res),
            )

            session.expunge_all()

        return res
@@ -1256,7 +1250,7 @@ class Library:

        with Session(self.engine) as session:
            return {x.key: x for x in session.scalars(select(ValueType)).all()}

    def get_value_type(self, field_key: str) -> ValueType:
    def get_value_type(self, field_key: str | None) -> ValueType | None:
        with Session(self.engine) as session:
            field = unwrap(session.scalar(select(ValueType).where(ValueType.key == field_key)))
            session.expunge(field)
@@ -1269,6 +1263,7 @@ class Library:

        field: ValueType | None = None,
        field_id: FieldID | str | None = None,
        value: str | datetime | None = None,
        skip_on_exists: bool = False,
    ) -> bool:
        logger.info(
            "[Library][add_field_to_entry]",

@@ -1285,6 +1280,27 @@ class Library:

            field_id = field_id.name
        field = self.get_value_type(unwrap(field_id))

        if not field:
            logger.error(
                "[Library] Could not add field to entry, invalid field type.", entry_id=entry_id
            )
            return False

        if skip_on_exists:
            entry = self.get_entry_full(entry_id, with_tags=False)
            if not entry:
                logger.exception("[Library] Entry does not exist", entry_id=entry_id)
                return False
            for field_ in entry.fields:
                if field_.value == value and field_.type_key == field_id:
                    logger.info(
                        "[Library] Field value already exists for entry",
                        entry_id=entry_id,
                        value=value,
                        type=field_id,
                    )
                    return False

        field_model: TextField | DatetimeField
        if field.type in (FieldTypeEnum.TEXT_LINE, FieldTypeEnum.TEXT_BOX):
            field_model = TextField(
@@ -1463,10 +1479,21 @@ class Library:

        for tag_id in tag_ids_:
            for entry_id in entry_ids_:
                try:
                    logger.info(
                        "[Library][add_tags_to_entries] Adding tag to entry...",
                        tag_id=tag_id,
                        entry_id=entry_id,
                    )
                    session.add(TagEntry(tag_id=tag_id, entry_id=entry_id))
                    total_added += 1
                    session.commit()
                except IntegrityError:
                except IntegrityError as e:
                    logger.warning(
                        "[Library][add_tags_to_entries] Tag already on entry",
                        warning=e,
                        tag_id=tag_id,
                        entry_id=entry_id,
                    )
                    session.rollback()

        return total_added
@@ -1563,6 +1590,7 @@ class Library:
|
||||
|
||||
return tag
|
||||
|
||||
# TODO: Fix and consolidate code with search_tags()
|
||||
def get_tag_by_name(self, tag_name: str) -> Tag | None:
|
||||
with Session(self.engine) as session:
|
||||
statement = (
|
||||
|
||||
544
src/tagstudio/core/macro_parser.py
Normal file
544
src/tagstudio/core/macro_parser.py
Normal file
@@ -0,0 +1,544 @@
# Copyright (C) 2025 Travis Abendshien (CyanVoxel).
# Licensed under the GPL-3.0 License.
# Created for TagStudio: https://github.com/CyanVoxel/TagStudio

import json
from copy import deepcopy
from enum import StrEnum
from pathlib import Path
from typing import TYPE_CHECKING, Any, override

import structlog
import toml
from wcmatch import glob

from tagstudio.core.library.alchemy.fields import FieldID

if TYPE_CHECKING:
    from tagstudio.core.library.alchemy.library import Library
    from tagstudio.core.library.alchemy.models import Tag


logger = structlog.get_logger(__name__)

SCHEMA_VERSION = "schema_version"
TRIGGERS = "triggers"
ACTION = "action"

SOURCE_LOCATION = "source_location"
SOURCE_FILER = "source_filters"
SOURCE_FORMAT = "source_format"
FILENAME_PLACEHOLDER = "{filename}"
EXT_PLACEHOLDER = "{ext}"
TEMPLATE = "template"
KEY = "key"

SOURCE_TYPE = "source_type"
TS_TYPE = "ts_type"
NAME = "name"

VALUE = "value"
TAGS = "tags"
TEXT_LINE = "text_line"
TEXT_BOX = "text_box"
DATETIME = "datetime"

PREFIX = "prefix"
DELIMITER = "delimiter"
STRICT = "strict"
USE_CONTEXT = "use_context"
ON_MISSING = "on_missing"

JSON = "json"
XMP = "xmp"
EXIF = "exif"
ID3 = "id3"

MAP = "map"
INVERSE_MAP = "inverse_map"


class Actions(StrEnum):
    IMPORT_DATA = "import_data"
    ADD_DATA = "add_data"


class OnMissing(StrEnum):
    PROMPT = "prompt"
    CREATE = "create"
    SKIP = "skip"
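Read together with the constants above and the parser that consumes them, a macro file for this schema might look like the following sketch. The trigger value and the table/array names (`sidecar`, `sidecar.entries`, `on_create`) are illustrative assumptions inferred from the parser, not a documented contract:

```toml
schema_version = 1
name = "Import JSON Sidecar"
# Trigger names here are hypothetical; only the key itself is defined by the parser.
triggers = ["on_create"]

# Each non-reserved top-level table is one action.
[sidecar]
action = "import_data"
source_format = "json"
source_location = "{filename}.{ext}.json"
source_filters = ["**/downloads/**"]

# Nested entries map sidecar data onto TagStudio tags and fields.
[[sidecar.entries]]
ts_type = "tags"
key = "tags"
delimiter = " "
on_missing = "skip"
```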
class Instruction:
    def __init__(self) -> None:
        pass


class AddFieldInstruction(Instruction):
    def __init__(self, content, name: FieldID, field_type: str) -> None:
        super().__init__()
        self.content = content
        self.name = name
        self.type = field_type

    @override
    def __str__(self) -> str:
        return str(self.content)


class AddTagInstruction(Instruction):
    def __init__(
        self,
        tag_strings: list[str],
        use_context: bool = True,
        strict: bool = False,
        on_missing: str = OnMissing.SKIP,
        prefix: str = "",
    ) -> None:
        super().__init__()
        self.tag_strings = tag_strings
        self.use_context = use_context
        self.strict = strict
        self.on_missing = on_missing
        self.prefix = prefix

    @override
    def __str__(self) -> str:
        return str(self.tag_strings)


def get_macro_name(
    macro_path: Path,
) -> str:
    """Return the name of a macro, as read from the file.

    Defaults to the filename if no name is declared or able to be read.

    Args:
        macro_path (Path): The full path of the macro file.
    """
    name = macro_path.name
    logger.info("[MacroParser] Parsing Macro for Name", macro_path=macro_path)

    if not macro_path.exists():
        logger.error("[MacroParser] Macro path does not exist", macro_path=macro_path)
        return name

    with open(macro_path) as f:
        try:
            macro = toml.load(f)
            name = str(macro.get("name", name))
        except toml.TomlDecodeError as e:
            logger.error("[MacroParser] Could not parse macro", macro_path=macro_path, error=e)
    logger.info("[MacroParser] Macro Name:", name=name, macro_path=macro_path)
    return name
def parse_macro_file(
    macro_path: Path,
    filepath: Path,
) -> list[Instruction]:
    """Parse a macro file and return a list of actions for TagStudio to perform.

    Args:
        macro_path (Path): The full path of the macro file.
        filepath (Path): The filepath associated with the Entry being operated upon.
    """
    results: list[Instruction] = []
    logger.info("[MacroParser] Parsing Macro", macro_path=macro_path, filepath=filepath)

    if not macro_path.exists():
        logger.error("[MacroParser] Macro path does not exist", macro_path=macro_path)
        return results

    if not filepath.exists():
        logger.error("[MacroParser] Filepath does not exist", filepath=filepath)
        return results

    with open(macro_path) as f:
        try:
            macro = toml.load(f)
        except toml.TomlDecodeError as e:
            logger.error("[MacroParser] Could not parse macro", macro_path=macro_path, error=e)
            return results

    logger.info(macro)

    # Check Schema Version
    schema_ver = macro.get(SCHEMA_VERSION, 0)
    if not isinstance(schema_ver, int):
        logger.error(
            f"[MacroParser] Incorrect type for {SCHEMA_VERSION}, expected int",
            schema_ver=schema_ver,
        )
        return results

    if schema_ver != 1:
        logger.error(f"[MacroParser] Unsupported Schema Version: {schema_ver}")
        return results

    logger.info(f"[MacroParser] Schema Version: {schema_ver}")

    # Load Triggers
    triggers = macro.get(TRIGGERS)
    if triggers and not isinstance(triggers, list):
        logger.error(
            f"[MacroParser] Incorrect type for {TRIGGERS}, expected list", triggers=triggers
        )

    # Parse each action table
    for table_key in macro:
        if table_key in {SCHEMA_VERSION, TRIGGERS, NAME}:
            continue

        logger.info("[MacroParser] Parsing Table", table_key=table_key)
        table: dict[str, Any] = macro[table_key]
        logger.info(table.keys())

        # TODO: Replace with table conditionals
        source_filters: list[str] = table.get(SOURCE_FILER, [])
        conditions_met: bool = False
        if not source_filters:
            conditions_met = True
        else:
            for filter_ in source_filters:
                if glob.globmatch(filepath, filter_, flags=glob.GLOBSTAR):
                    logger.info(
                        f"[MacroParser] [{table_key}] "
                        f'"{SOURCE_FILER}" Met filter requirement: {filter_}'
                    )
                    conditions_met = True

        if not conditions_met:
            logger.warning(
                f"[MacroParser] [{table_key}] File didn't meet any path filter requirement",
                filters=source_filters,
                filepath=filepath,
            )
            continue

        action: str = table.get(ACTION, "")
        logger.info(f'[MacroParser] [{table_key}] "{ACTION}": {action}')

        if action == Actions.IMPORT_DATA:
            results.extend(_import_data(table, table_key, filepath))
        elif action == Actions.ADD_DATA:
            results.extend(_add_data(table))

    logger.info(results)
    return results
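The `source_filters` gate above relies on wcmatch's `globmatch` with the `GLOBSTAR` flag. As a rough standard-library approximation (note that `fnmatch`'s `*` also crosses `/` boundaries, unlike a true globstar), the pass/skip decision looks like this; `path_passes_filters` is an illustrative helper, not TagStudio code:

```python
from fnmatch import fnmatch


def path_passes_filters(filepath: str, filters: list[str]) -> bool:
    """Return True when no filters are defined, or when any filter matches the path."""
    if not filters:
        return True
    return any(fnmatch(filepath, pattern) for pattern in filters)
```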
def _import_data(table: dict[str, Any], table_key: str, filepath: Path) -> list[Instruction]:
    """Process an import_data instruction and return a list of Instructions.

    Importing data refers to importing data from a source external to TagStudio or any macro.
    """
    results: list[Instruction] = []

    source_format: str = str(table.get(SOURCE_FORMAT, ""))
    if not source_format:
        logger.error(f'[MacroParser] Parser Error: No "{SOURCE_FORMAT}" provided for table')
    logger.info(f'[MacroParser] [{table_key}] "{SOURCE_FORMAT}": {source_format}')

    raw_source_location = str(table.get(SOURCE_LOCATION, ""))
    if FILENAME_PLACEHOLDER in raw_source_location:
        # logger.info(f"[MacroParser] Filename placeholder detected: {raw_source_location}")
        raw_source_location = raw_source_location.replace(FILENAME_PLACEHOLDER, str(filepath.stem))

    if EXT_PLACEHOLDER in raw_source_location:
        # logger.info(f"[MacroParser] File extension placeholder detected: {raw_source_location}")
        # TODO: Make work with files that have multiple suffixes, like .tar.gz
        raw_source_location = raw_source_location.replace(
            EXT_PLACEHOLDER,
            str(filepath.suffix)[1:],  # Remove leading "."
        )

    if not raw_source_location.startswith(("/", "~")):
        # The source location must be relative to the given filepath
        source_location = filepath.parent / Path(raw_source_location)
    else:
        source_location = Path(raw_source_location)

    logger.info(f'[MacroParser] [{table_key}] "{SOURCE_LOCATION}": {source_location}')

    if not source_location.exists():
        logger.error(
            "[MacroParser] Sidecar filepath does not exist", source_location=source_location
        )
        return results

    if source_format.lower() == JSON:
        logger.info("[MacroParser] Parsing JSON sidecar file", sidecar_path=source_location)
        with open(source_location, encoding="utf8") as f:
            json_dump = json.load(f)
            if not json_dump:
                logger.warning("[MacroParser] Empty JSON sidecar file")
                return results
            logger.info(json_dump.items())

        for table_key, table_value in table.items():
            objects: list[dict[str, Any] | str] = []
            content_value = ""
            if isinstance(table_value, list):
                objects = table_value
            else:
                objects.append(table_value)
            for obj in objects:
                if not isinstance(obj, dict):
                    continue
                ts_type: str = str(obj.get(TS_TYPE, ""))
                if not ts_type:
                    logger.warning(
                        f'[MacroParser] [{table_key}] No "{TS_TYPE}" key provided, skipping'
                    )
                    continue

                json_key: str = str(obj.get(KEY, ""))
                if json_key and json_key in json_dump:
                    json_value = json_dump.get(json_key)
                    logger.info(
                        f"[MacroParser] [{table_key}] Parsing JSON sidecar key",
                        key=json_key,
                        table_value=obj,
                        json_value=json_value,
                    )
                    content_value = json_value

                    if not json_value or isinstance(json_value, str) and not json_value.strip():
                        logger.warning(
                            f"[MacroParser] [{table_key}] Value for key was empty, skipping"
                        )
                        continue

                elif table_key == TEMPLATE:
                    template: str = str(obj.get(TEMPLATE, ""))
                    logger.info(f"[MacroParser] [{table_key}] Filling template", template=template)
                    if not template:
                        logger.warning(f"[MacroParser] [{table_key}] Empty template, skipping")
                        continue
                    for k in json_dump:
                        template = _fill_template(template, json_dump, k)
                    logger.info(f"[MacroParser] [{table_key}] Template filled!", template=template)
                    content_value = template

                else:
                    continue

                # TODO: Determine if the source_type is even really ever needed
                # source_type: str = str(tab_value.get(SOURCE_TYPE, ""))

                str_name: str = str(obj.get(NAME, FieldID.NOTES.name))
                name: FieldID = FieldID.NOTES
                for fid in FieldID:
                    field_id = str_name.upper().replace(" ", "_")
                    if field_id == fid.name:
                        name = fid
                        break

                if ts_type == TAGS:
                    use_context: bool = bool(obj.get(USE_CONTEXT, False))
                    on_missing: str = str(obj.get(ON_MISSING, OnMissing.SKIP))
                    strict: bool = bool(obj.get(STRICT, False))
                    delimiter: str = ""

                    tag_strings: list[str] = []
                    # Tags are part of a single string
                    if isinstance(content_value, str):
                        delimiter = str(obj.get(DELIMITER, ""))
                        if delimiter:
                            # Split string based on given delimiter
                            tag_strings = content_value.split(delimiter)
                        else:
                            # If no delimiter is provided, assume the string is a single tag
                            tag_strings.append(content_value)
                    elif isinstance(content_value, bool):
                        tag_strings = [str(content_value)]
                    elif isinstance(content_value, list):
                        tag_strings = [str(v) for v in content_value]  # pyright: ignore[reportUnknownVariableType]
                    else:
                        tag_strings = deepcopy([content_value])

                    # Remove a prefix (if given) from all tag strings (if any)
                    prefix = str(obj.get(PREFIX, ""))
                    if prefix:
                        tag_strings = [t.removeprefix(prefix) for t in tag_strings]

                    # Swap any mapped tags for their new tag values
                    tag_map: dict[str, str] = obj.get(MAP, {})
                    mapped: list[str] = []
                    if tag_map:
                        for map_key, map_value in tag_map.items():
                            if map_key in tag_strings:
                                logger.info("[MacroParser] Mapping tag", old=map_key, new=map_value)
                                if isinstance(map_value, list):
                                    mapped.extend(map_value)
                                else:
                                    mapped.append(map_value)
                                tag_strings.remove(map_key)
                        tag_strings.extend(mapped)

                    tag_strings = [t.strip() for t in tag_strings if t.strip()]

                    logger.info("[MacroParser] Found tags", tag_strings=tag_strings)
                    results.append(
                        AddTagInstruction(
                            tag_strings=tag_strings,
                            use_context=use_context,
                            strict=strict,
                            on_missing=on_missing,
                            prefix="",
                        )
                    )

                elif ts_type in (TEXT_LINE, TEXT_BOX, DATETIME):
                    results.append(
                        AddFieldInstruction(content=content_value, name=name, field_type=ts_type)
                    )
                else:
                    logger.error(f'[MacroParser] [{table_key}] Unknown "{TS_TYPE}"', type=ts_type)

    return results
def _add_data(table: dict[str, Any]) -> list[Instruction]:
    """Process an add_data instruction and return a list of Instructions.

    Adding data refers to adding data defined inside a TagStudio macro, not from an external source.
    """
    results: list[Instruction] = []
    logger.error(table)
    for table_value in table.values():
        objects: list[dict[str, Any] | str] = []
        if isinstance(table_value, list):
            objects = table_value
        else:
            objects.append(table_value)
        for obj in objects:
            if not isinstance(obj, dict):
                continue
            ts_type = obj.get(TS_TYPE, "")
            if ts_type == TAGS:
                tag_strings: list[str] = obj.get(VALUE, [])
                logger.error(tag_strings)
                results.append(
                    AddTagInstruction(
                        tag_strings=tag_strings,
                        use_context=False,
                    )
                )
            elif ts_type in (TEXT_LINE, TEXT_BOX, DATETIME):
                str_name: str = str(obj.get(NAME, FieldID.NOTES.name))
                name: FieldID = FieldID.NOTES
                for fid in FieldID:
                    field_id = str_name.upper().replace(" ", "_")
                    if field_id == fid.name:
                        name = fid
                        break

                content_value: str = str(obj.get(VALUE, ""))
                results.append(
                    AddFieldInstruction(content=content_value, name=name, field_type=ts_type)
                )

    return results
def _fill_template(
    template: str, table: dict[str, Any], table_key: str, template_key: str = ""
) -> str:
    """Replace placeholder keys in a string with the value from that table.

    Args:
        template (str): The string containing placeholder keys.
            Key names should be surrounded in curly braces (e.g. "{key}").
            Nested keys are accessed by separating the keys with a dot (e.g. "{key.nested_key}").
        table (dict[str, Any]): The table to look up values from.
        table_key (str): The key to search for in the template and access the table with.
        template_key (str): Similar to table_key, but is not used for accessing the table and
            is instead used for representing the template key syntax for nested keys.
            Used in recursive calls.
    """
    key = template_key or table_key
    value = table.get(table_key, "")

    if isinstance(value, dict):
        for v in value:
            # NOTE: This f-string is the only thing defining how the nested key syntax works.
            # If instead you wanted to use key[nested] syntax for example, use: f"{key}[{str(v)}]"
            normalized_key: str = f"{key}.{str(v)}"
            template = _fill_template(template, value, str(v), normalized_key)

    value = str(value)
    return template.replace(f"{{{key}}}", f"{value}")
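The recursive placeholder replacement can be demonstrated in isolation. This sketch reimplements the same idea with ad-hoc names (`fill_template` is not the module's function) and shows a nested "{key.nested}" lookup:

```python
from typing import Any


def fill_template(template: str, table: dict[str, Any], key: str, path: str = "") -> str:
    """Recursively replace "{key}" / "{key.nested}" placeholders with table values."""
    label = path or key
    value = table.get(key, "")
    if isinstance(value, dict):
        # Descend into nested tables, building dotted placeholder names.
        for sub in value:
            template = fill_template(template, value, sub, f"{label}.{sub}")
    return template.replace(f"{{{label}}}", str(value))


sidecar = {"title": "Sunset", "user": {"name": "ada"}}
text = "{title} by {user.name}"
for k in sidecar:
    text = fill_template(text, sidecar, k)
# text is now "Sunset by ada"
```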
def exec_instructions(library: "Library", entry_id: int, results: list[Instruction]) -> None:
    for result in results:
        if isinstance(result, AddTagInstruction):
            _exec_add_tag(library, entry_id, result)
        elif isinstance(result, AddFieldInstruction):
            _exec_add_field(library, entry_id, result)


def _exec_add_tag(library: "Library", entry_id: int, result: AddTagInstruction):
    tag_ids: set[int] = set()
    for string in result.tag_strings:
        if not string.strip():
            continue
        string = string.replace("_", " ")
        base_and_parent = string.split("(")
        parent = ""
        base = base_and_parent[0].strip(" ")
        parent_results: list[int] = []
        if len(base_and_parent) > 1:
            parent = base_and_parent[1].split(")")[0]
            r: list[set[Tag]] = library.search_tags(name=parent, limit=-1)
            if len(r) > 0:
                parent_results = [t.id for t in r[0]]
        # NOTE: The following code overlaps with update_tags() in tag_search.py
        # Sort and prioritize the results
        tag_results: list[set[Tag]] = library.search_tags(name=base, limit=-1)
        results_0 = list(tag_results[0])
        results_0.sort(key=lambda tag: tag.name.lower())
        results_1 = list(tag_results[1])
        results_1.sort(key=lambda tag: tag.name.lower())
        raw_results = list(results_0 + results_1)
        priority_results: set[Tag] = set()

        for tag in raw_results:
            if tag.name.lower().startswith(base.strip().lower()):
                priority_results.add(tag)
        all_results = sorted(list(priority_results), key=lambda tag: len(tag.name)) + [
            r for r in raw_results if r not in priority_results
        ]

        if parent and parent_results:
            filtered_parents: list[Tag] = []
            for tag in all_results:
                for p_id in tag.parent_ids:
                    if p_id in parent_results:
                        filtered_parents.append(tag)
                        break
            all_results = [t for t in all_results if t in filtered_parents]

        final_tag: Tag | None = None
        if len(all_results) > 0:
            final_tag = all_results[0]
        if final_tag:
            tag_ids.add(final_tag.id)

    if not tag_ids:
        return

    library.add_tags_to_entries(entry_id, tag_ids)


def _exec_add_field(library: "Library", entry_id: int, result: AddFieldInstruction):
    library.add_field_to_entry(
        entry_id, field_id=result.name, value=result.content, skip_on_exists=True
    )
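The tag-resolution loop above first splits each string on "base (parent)" disambiguation syntax before searching the library. That parsing step can be sketched on its own; `split_tag_string` is an illustrative helper, not TagStudio code:

```python
def split_tag_string(raw: str) -> tuple[str, str]:
    """Split "base (parent)" tag syntax into the base name and optional parent name."""
    raw = raw.replace("_", " ")  # Underscores are treated as spaces, as in the executor.
    base, _, rest = raw.partition("(")
    return base.strip(), rest.partition(")")[0].strip()
```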
@@ -1,183 +0,0 @@
# Copyright (C) 2024 Travis Abendshien (CyanVoxel).
# Licensed under the GPL-3.0 License.
# Created for TagStudio: https://github.com/CyanVoxel/TagStudio

"""The core classes and methods of TagStudio."""

import json
from pathlib import Path

import structlog

from tagstudio.core.constants import TS_FOLDER_NAME
from tagstudio.core.library.alchemy.fields import FieldID
from tagstudio.core.library.alchemy.library import Library
from tagstudio.core.library.alchemy.models import Entry

logger = structlog.get_logger(__name__)


class TagStudioCore:
    def __init__(self):
        self.lib: Library = Library()

    @classmethod
    def get_gdl_sidecar(cls, filepath: Path, source: str = "") -> dict:
        """Attempt to open and dump a Gallery-DL Sidecar file for the filepath.

        Return a formatted object with notable values or an empty object if none is found.
        """
        info = {}
        _filepath = filepath.parent / (filepath.name + ".json")

        # NOTE: This fixes an unknown (recent?) bug in Gallery-DL where Instagram sidecar
        # files may be downloaded with indices starting at 1 rather than 0, unlike the posts.
        # This may only occur with sidecar files that are downloaded separate from posts.
        if source == "instagram" and not _filepath.is_file():
            newstem = _filepath.stem[:-16] + "1" + _filepath.stem[-15:]
            _filepath = _filepath.parent / (newstem + ".json")

        logger.info("get_gdl_sidecar", filepath=filepath, source=source, sidecar=_filepath)

        try:
            with open(_filepath, encoding="utf8") as f:
                json_dump = json.load(f)
                if not json_dump:
                    return {}

                if source == "twitter":
                    info[FieldID.DESCRIPTION] = json_dump["content"].strip()
                    info[FieldID.DATE_PUBLISHED] = json_dump["date"]
                elif source == "instagram":
                    info[FieldID.DESCRIPTION] = json_dump["description"].strip()
                    info[FieldID.DATE_PUBLISHED] = json_dump["date"]
                elif source == "artstation":
                    info[FieldID.TITLE] = json_dump["title"].strip()
                    info[FieldID.ARTIST] = json_dump["user"]["full_name"].strip()
                    info[FieldID.DESCRIPTION] = json_dump["description"].strip()
                    info[FieldID.TAGS] = json_dump["tags"]
                    # info["tags"] = [x for x in json_dump["mediums"]["name"]]
                    info[FieldID.DATE_PUBLISHED] = json_dump["date"]
                elif source == "newgrounds":
                    # info["title"] = json_dump["title"]
                    # info["artist"] = json_dump["artist"]
                    # info["description"] = json_dump["description"]
                    info[FieldID.TAGS] = json_dump["tags"]
                    info[FieldID.DATE_PUBLISHED] = json_dump["date"]
                    info[FieldID.ARTIST] = json_dump["user"].strip()
                    info[FieldID.DESCRIPTION] = json_dump["description"].strip()
                    info[FieldID.SOURCE] = json_dump["post_url"].strip()

        except Exception:
            logger.exception("Error handling sidecar file.", path=_filepath)

        return info

    # def scrape(self, entry_id):
    #     entry = self.lib.get_entry(entry_id)
    #     if entry.fields:
    #         urls: list[str] = []
    #         if self.lib.get_field_index_in_entry(entry, 21):
    #             urls.extend([self.lib.get_field_attr(entry.fields[x], 'content')
    #                 for x in self.lib.get_field_index_in_entry(entry, 21)])
    #         if self.lib.get_field_index_in_entry(entry, 3):
    #             urls.extend([self.lib.get_field_attr(entry.fields[x], 'content')
    #                 for x in self.lib.get_field_index_in_entry(entry, 3)])
    #     # try:
    #     if urls:
    #         for url in urls:
    #             url = "https://" + url if 'https://' not in url else url
    #             html_doc = requests.get(url).text
    #             soup = bs(html_doc, "html.parser")
    #             print(soup)
    #             input()

    #     # except:
    #     #     # print("Could not resolve URL.")
    #     #     pass

    @classmethod
    def match_conditions(cls, lib: Library, entry_id: int) -> bool:
        """Match defined conditions against a file to add Entry data."""
        # TODO - what even is this file format?
        # TODO: Make this stored somewhere better instead of temporarily in this JSON file.
        cond_file = lib.library_dir / TS_FOLDER_NAME / "conditions.json"
        if not cond_file.is_file():
            return False

        entry: Entry = lib.get_entry(entry_id)

        try:
            with open(cond_file, encoding="utf8") as f:
                json_dump = json.load(f)
                for c in json_dump["conditions"]:
                    match: bool = False
                    for path_c in c["path_conditions"]:
                        if Path(path_c).is_relative_to(entry.path):
                            match = True
                            break

                    if not match:
                        return False

                    if not c.get("fields"):
                        return False

                    fields = c["fields"]
                    entry_field_types = {field.type_key: field for field in entry.fields}

                    for field in fields:
                        is_new = field["id"] not in entry_field_types
                        field_key = field["id"]
                        if is_new:
                            lib.add_field_to_entry(entry.id, field_key, field["value"])
                        else:
                            lib.update_entry_field(entry.id, field_key, field["value"])

        except Exception:
            logger.exception("Error matching conditions.", entry=entry)

        return False

    @classmethod
    def build_url(cls, entry: Entry, source: str):
        """Try to rebuild a source URL given a specific filename structure."""
        source = source.lower().replace("-", " ").replace("_", " ")
        if "twitter" in source:
            return cls._build_twitter_url(entry)
        elif "instagram" in source:
            return cls._build_instagram_url(entry)

    @classmethod
    def _build_twitter_url(cls, entry: Entry):
        """Build a Twitter URL given a specific filename structure.

        Method expects filename to be formatted as 'USERNAME_TWEET-ID_INDEX_YEAR-MM-DD'
        """
        try:
            stubs = str(entry.path.name).rsplit("_", 3)
            url = f"www.twitter.com/{stubs[0]}/status/{stubs[-3]}/photo/{stubs[-2]}"
            return url
        except Exception:
            logger.exception("Error building Twitter URL.", entry=entry)
            return ""

    @classmethod
    def _build_instagram_url(cls, entry: Entry):
        """Build an Instagram URL given a specific filename structure.

        Method expects filename to be formatted as 'USERNAME_POST-ID_INDEX_YEAR-MM-DD'
        """
        try:
            stubs = str(entry.path.name).rsplit("_", 2)
            # stubs[0] = stubs[0].replace(f"{author}_", '', 1)
            # print(stubs)
            # NOTE: Both Instagram usernames AND their ID can have underscores in them,
            # so unless you have the exact username (which can change) on hand to remove,
            # your other best bet is to hope that the ID is only 11 characters long, which
            # seems to more or less be the case... for now...
            url = f"www.instagram.com/p/{stubs[-3][-11:]}"
            return url
        except Exception:
            logger.exception("Error building Instagram URL.", entry=entry)
            return ""
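The URL rebuilders in the deleted file work by splitting the filename from the right on underscores. A standalone sketch of the Twitter case (the function name here is illustrative; it assumes a simple 'USERNAME_TWEET-ID_INDEX_YEAR-MM-DD' filename with no underscores in the username):

```python
def build_twitter_url(filename: str) -> str:
    """Rebuild a tweet URL from a 'USERNAME_TWEET-ID_INDEX_YEAR-MM-DD' style filename."""
    # rsplit from the right so an underscore-free username survives in stubs[0].
    stubs = filename.rsplit("_", 3)
    return f"www.twitter.com/{stubs[0]}/status/{stubs[-3]}/photo/{stubs[-2]}"
```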
@@ -21,12 +21,14 @@ from pathlib import Path
|
||||
from queue import Queue
|
||||
from shutil import which
|
||||
from typing import Generic, TypeVar
|
||||
from unittest.mock import Mock
|
||||
from warnings import catch_warnings
|
||||
|
||||
import structlog
|
||||
from humanfriendly import format_size, format_timespan
|
||||
from PySide6.QtCore import QObject, QSettings, Qt, QThread, QThreadPool, QTimer, Signal
|
||||
from PySide6.QtGui import (
|
||||
QAction,
|
||||
QColor,
|
||||
QDragEnterEvent,
|
||||
QDragMoveEvent,
|
||||
@@ -45,29 +47,37 @@ from PySide6.QtWidgets import (
|
||||
QScrollArea,
|
||||
)
|
||||
|
||||
import tagstudio.qt.resources_rc # noqa: F401
|
||||
from tagstudio.core.constants import TAG_ARCHIVED, TAG_FAVORITE, VERSION, VERSION_BRANCH
|
||||
# This import has side-effect of import PySide resources
|
||||
import tagstudio.qt.resources_rc # noqa: F401 # pyright: ignore [reportUnusedImport]
|
||||
from tagstudio.core.constants import (
|
||||
MACROS_FOLDER_NAME,
|
||||
TAG_ARCHIVED,
|
||||
TAG_FAVORITE,
|
||||
TS_FOLDER_NAME,
|
||||
VERSION,
|
||||
VERSION_BRANCH,
|
||||
)
|
||||
from tagstudio.core.driver import DriverMixin
|
||||
from tagstudio.core.enums import MacroID, SettingItems, ShowFilepathOption
|
||||
from tagstudio.core.enums import SettingItems, ShowFilepathOption
|
||||
from tagstudio.core.library.alchemy.enums import (
|
||||
BrowsingState,
|
||||
FieldTypeEnum,
|
||||
SortingModeEnum,
|
||||
)
|
||||
from tagstudio.core.library.alchemy.fields import FieldID
|
||||
from tagstudio.core.library.alchemy.library import Library, LibraryStatus
|
||||
from tagstudio.core.library.alchemy.models import Entry
|
||||
from tagstudio.core.library.ignore import Ignore
|
||||
from tagstudio.core.library.refresh import RefreshTracker
|
||||
from tagstudio.core.macro_parser import (
|
||||
Instruction,
|
||||
exec_instructions,
|
||||
get_macro_name,
|
||||
parse_macro_file,
|
||||
)
|
||||
from tagstudio.core.media_types import MediaCategories
|
||||
from tagstudio.core.query_lang.util import ParsingError
|
||||
from tagstudio.core.ts_core import TagStudioCore
|
||||
from tagstudio.core.utils.str_formatting import strip_web_protocol
|
||||
from tagstudio.core.utils.types import unwrap
|
||||
from tagstudio.qt.cache_manager import CacheManager
|
||||
from tagstudio.qt.controllers.ffmpeg_missing_message_box import FfmpegMissingMessageBox
|
||||
|
||||
# this import has side-effect of import PySide resources
|
||||
from tagstudio.qt.controllers.fix_ignored_modal_controller import FixIgnoredEntriesModal
|
||||
from tagstudio.qt.controllers.ignore_modal_controller import IgnoreModal
|
||||
from tagstudio.qt.controllers.library_info_window_controller import LibraryInfoWindow
|
||||
@@ -96,6 +106,7 @@ from tagstudio.qt.resource_manager import ResourceManager
|
||||
from tagstudio.qt.translations import Translations
|
||||
from tagstudio.qt.utils.custom_runnable import CustomRunnable
|
||||
from tagstudio.qt.utils.file_deleter import delete_file
|
||||
from tagstudio.qt.utils.file_opener import open_file
|
||||
from tagstudio.qt.utils.function_iterator import FunctionIterator
|
||||
from tagstudio.qt.views.main_window import MainWindow
|
||||
from tagstudio.qt.views.panel_modal import PanelModal
|
||||
@@ -308,10 +319,20 @@ class QtDriver(DriverMixin, QObject):
|
||||
pal: QPalette = self.app.palette()
|
||||
pal.setColor(QPalette.ColorGroup.Normal, QPalette.ColorRole.Window, QColor("#1e1e1e"))
|
||||
pal.setColor(QPalette.ColorGroup.Normal, QPalette.ColorRole.Button, QColor("#1e1e1e"))
|
||||
pal.setColor(QPalette.ColorGroup.Inactive, QPalette.ColorRole.Window, QColor("#232323"))
|
||||
pal.setColor(QPalette.ColorGroup.Inactive, QPalette.ColorRole.Button, QColor("#232323"))
|
||||
pal.setColor(
|
||||
QPalette.ColorGroup.Inactive, QPalette.ColorRole.ButtonText, QColor("#666666")
|
||||
QPalette.ColorGroup.Inactive,
|
||||
QPalette.ColorRole.Window,
|
||||
QColor("#232323"),
|
||||
)
|
||||
pal.setColor(
|
||||
QPalette.ColorGroup.Inactive,
|
||||
QPalette.ColorRole.Button,
|
||||
QColor("#232323"),
|
||||
)
|
||||
pal.setColor(
|
||||
QPalette.ColorGroup.Inactive,
|
||||
QPalette.ColorRole.ButtonText,
|
||||
QColor("#666666"),
|
||||
)
|
||||
|
||||
self.app.setPalette(pal)
|
||||
@@ -534,6 +555,8 @@ class QtDriver(DriverMixin, QObject):
|
||||
# endregion
|
||||
|
||||
# region Macros Menu ==========================================================
|
||||
self.main_window.menu_bar.macros_menu.aboutToShow.connect(self.update_macros_menu)
|
||||
|
||||
def create_folders_tags_modal():
|
||||
if not hasattr(self, "folders_modal"):
|
||||
self.folders_modal = FoldersToTagsModal(self.lib, self)
|
||||
@@ -555,8 +578,6 @@ class QtDriver(DriverMixin, QObject):
|
||||
|
||||
# endregion
|
||||
|
||||
# endregion
|
||||
|
||||
self.main_window.search_field.textChanged.connect(self.update_completions_list)
|
||||
|
||||
self.main_window.preview_panel.field_containers_widget.archived_updated.connect(
|
||||
@@ -748,6 +769,7 @@

self.set_clipboard_menu_viability()
self.set_select_actions_visibility()
self.update_macros_menu(clear=True)

if hasattr(self, "library_info_window"):
    self.library_info_window.close()
@@ -782,7 +804,8 @@
end_time = time.time()
self.main_window.status_bar.showMessage(
    Translations.format(
        "status.library_closed", time_span=format_timespan(end_time - start_time)
        "status.library_closed",
        time_span=format_timespan(end_time - start_time),
    )
)
|
||||
|
||||
@@ -1075,56 +1098,42 @@
# # self.run_macro('autofill', id)
yield 0

def run_macros(self, name: MacroID, entry_ids: list[int]):
    """Run a specific Macro on a group of given entry_ids."""
def run_macros(self, macro_name: str, entry_ids: list[int]):
    """Run a Macro on a list of entries."""
    for entry_id in entry_ids:
        self.run_macro(name, entry_id)
        self.run_macro(macro_name, entry_id)
    self.main_window.preview_panel.refresh_selection(update_preview=False)

def run_macro(self, macro_name: str, entry_id: int):
    """Run a Macro on a single entry."""
    if not self.lib.library_dir:
        logger.error("[QtDriver] Can't run macro when no library is open!")
        return

    entry: Entry | None = self.lib.get_entry(entry_id)
    if not entry:
        logger.error(f"[QtDriver] No Entry given ID {entry_id}!")
        return

def run_macro(self, name: MacroID, entry_id: int):
    """Run a specific Macro on an Entry given a Macro name."""
    entry: Entry = self.lib.get_entry(entry_id)
    full_path = self.lib.library_dir / entry.path
    source = "" if entry.path.parent == Path(".") else entry.path.parts[0].lower()
    # macro_path = Path(
    # self.lib.library_dir / TS_FOLDER_NAME / MACROS_FOLDER_NAME / f"{macro_name}.toml"
    # )
    macro_path = Path(self.lib.library_dir / TS_FOLDER_NAME / MACROS_FOLDER_NAME / macro_name)

    logger.info(
        "running macro",
        source=source,
        macro=name,
        "[QtDriver] Running Macro",
        macro_path=macro_name,
        entry_id=entry.id,
        grid_idx=entry_id,
    )

    if name == MacroID.AUTOFILL:
        for macro_id in MacroID:
            if macro_id == MacroID.AUTOFILL:
                continue
            self.run_macro(macro_id, entry_id)
    results: list[Instruction] = parse_macro_file(macro_path, full_path)
    exec_instructions(self.lib, entry_id, results)

    elif name == MacroID.SIDECAR:
        parsed_items = TagStudioCore.get_gdl_sidecar(full_path, source)
        for field_id, value in parsed_items.items():
            if isinstance(value, list) and len(value) > 0 and isinstance(value[0], str):
                value = self.lib.tag_from_strings(value)
            self.lib.add_field_to_entry(
                entry.id,
                field_id=field_id,
                value=value,
            )

    elif name == MacroID.BUILD_URL:
        url = TagStudioCore.build_url(entry, source)
        if url is not None:
            self.lib.add_field_to_entry(entry.id, field_id=FieldID.SOURCE, value=url)
    elif name == MacroID.MATCH:
        TagStudioCore.match_conditions(self.lib, entry.id)
    elif name == MacroID.CLEAN_URL:
        for field in entry.text_fields:
            if field.type.type == FieldTypeEnum.TEXT_LINE and field.value:
                self.lib.update_entry_field(
                    entry_ids=entry.id,
                    field=field,
                    content=strip_web_protocol(field.value),
                )
@property
def sorting_direction(self) -> bool:
    """Whether to Sort the results in ascending order."""
    return self.main_window.sorting_direction_combobox.currentData()

def sorting_direction_callback(self):
    logger.info("Sorting Direction Changed", ascending=self.main_window.sorting_direction)
@@ -1248,6 +1257,12 @@

self.main_window.preview_panel.set_selection(self.selected)

# TODO: Remove?
def set_macro_menu_viability(self):
    # for action in self.macros_menu.actions():
    # action.setDisabled(not self.selected)
    pass

def set_clipboard_menu_viability(self):
    if len(self.selected) == 1:
        self.main_window.menu_bar.copy_fields_action.setEnabled(True)
@@ -1280,7 +1295,8 @@

def update_completions_list(self, text: str) -> None:
    matches = re.search(
        r"((?:.* )?)(mediatype|filetype|path|tag|tag_id):(\"?[A-Za-z0-9\ \t]+\"?)?", text
        r"((?:.* )?)(mediatype|filetype|path|tag|tag_id):(\"?[A-Za-z0-9\ \t]+\"?)?",
        text,
    )

    completion_list: list[str] = []
@@ -1519,6 +1535,53 @@
self.cached_values.sync()
self.update_recent_lib_menu()

def update_macros_menu(self, clear: bool = False):
    if not self.main_window.menu_bar.macros_menu or isinstance(
        self.main_window.menu_bar.macros_menu, Mock
    ): # NOTE: Needed for tests?
        return

    # Create actions for each macro
    actions: list[QAction] = []
    if self.lib.library_dir and not clear:
        macros_path = self.lib.library_dir / TS_FOLDER_NAME / MACROS_FOLDER_NAME
        for f in macros_path.glob("*"):
            logger.info(f)
            if f.suffix != ".toml" or f.is_dir() or f.name.startswith("._"):
                continue
            action = QAction(
                get_macro_name(f), self.main_window.menu_bar.macros_menu.parentWidget()
            )
            action.triggered.connect(
                lambda checked=False, name=f.name: (self.run_macros(name, self.selected)),
            )
            actions.append(action)

    open_folder = QAction("Open Macros Folder...", self.main_window.menu_bar.macros_menu)
    open_folder.triggered.connect(self.open_macros_folder)
    actions.append(open_folder)

    if clear:
        open_folder.setEnabled(False)

    # Clear previous actions
    for action in self.main_window.menu_bar.macros_menu.actions():
        self.main_window.menu_bar.macros_menu.removeAction(action)

    # Add new actions
    for action in actions:
        self.main_window.menu_bar.macros_menu.addAction(action)

    self.main_window.menu_bar.macros_menu.addSeparator()
    self.main_window.menu_bar.macros_menu.addAction(open_folder)

def open_macros_folder(self):
    if not self.lib.library_dir:
        return
    path = self.lib.library_dir / TS_FOLDER_NAME / MACROS_FOLDER_NAME
    path.mkdir(exist_ok=True)
    open_file(path, file_manager=True, is_dir=True)

def open_settings_modal(self):
    SettingsPanel.build_modal(self).show()

@@ -1551,7 +1614,10 @@
except Exception as e:
    logger.error(e)
    open_status = LibraryStatus(
        success=False, library_path=path, message=type(e).__name__, msg_description=str(e)
        success=False,
        library_path=path,
        message=type(e).__name__,
        msg_description=str(e),
    )
self.cache_manager = CacheManager(
    path,
@@ -1598,6 +1664,7 @@
library_dir_display = self.lib.library_dir.name

self.update_libs_list(path)
self.update_macros_menu()
self.main_window.setWindowTitle(
    Translations.format(
        "app.title",
@@ -1624,7 +1691,7 @@
self.main_window.menu_bar.folders_to_tags_action.setEnabled(True)
self.main_window.menu_bar.library_info_action.setEnabled(True)

self.main_window.preview_panel.set_selection(self.selected)
self.main_window.preview_panel.set_selection()

# page (re)rendering, extract eventually
initial_state = BrowsingState(

@@ -20,13 +20,19 @@ from tagstudio.core.utils.types import unwrap
logger = structlog.get_logger(__name__)


def open_file(path: str | Path, file_manager: bool = False, windows_start_command: bool = False):
def open_file(
    path: str | Path,
    file_manager: bool = False,
    is_dir: bool = False,
    windows_start_command: bool = False,
):
    """Open a file in the default application or file explorer.

    Args:
        path (str): The path to the file to open.
        file_manager (bool, optional): Whether to open the file in the file manager
            (e.g. Finder on macOS). Defaults to False.
        is_dir (bool): True if the path points towards a directory, false if a file.
        windows_start_command (bool): Flag to determine if the older 'start' command should be used
            on Windows for opening files. This fixes issues on some systems in niche cases.
    """
@@ -77,7 +83,7 @@ def open_file(path: str | Path, file_manager: bool = False, windows_start_comman
if sys.platform == "darwin":
    command_name = "open"
    command_args = [str(path)]
    if file_manager:
    if file_manager and not is_dir:
        # will reveal in Finder
        command_args.append("-R")
    else:

@@ -64,6 +64,7 @@ class PreviewPanelView(QWidget):
    super().__init__()
    self.lib = library

    self._selected = []
    self.__thumb = PreviewThumb(self.lib, driver)
    self.__file_attrs = FileAttributes(self.lib, driver)
    self._fields = FieldContainers(
@@ -132,7 +133,11 @@ class PreviewPanelView(QWidget):
def _set_selection_callback(self):
    raise NotImplementedError()

def set_selection(self, selected: list[int], update_preview: bool = True):
def refresh_selection(self, update_preview: bool = True):
    """Refresh the panel's widgets to use current library data."""
    self.set_selection(self._selected, update_preview)

def set_selection(self, selected: list[int] | None = None, update_preview: bool = True):
    """Render the panel widgets with the newest data from the Library.

    Args:
@@ -140,10 +145,10 @@ class PreviewPanelView(QWidget):
        update_preview (bool): Should the file preview be updated?
            (Only works with one or more items selected)
    """
    self._selected = selected
    self._selected = selected or []
    try:
        # No Items Selected
        if len(selected) == 0:
        if len(self._selected) == 0:
            self.__thumb.hide_preview()
            self.__file_attrs.update_stats()
            self.__file_attrs.update_date_label()
@@ -152,8 +157,8 @@ class PreviewPanelView(QWidget):
            self.add_buttons_enabled = False

        # One Item Selected
        elif len(selected) == 1:
            entry_id = selected[0]
        elif len(self._selected) == 1:
            entry_id = self._selected[0]
            entry: Entry = unwrap(self.lib.get_entry(entry_id))

            filepath: Path = unwrap(self.lib.library_dir) / entry.path
|
||||
self.add_buttons_enabled = True
|
||||
|
||||
# Multiple Selected Items
|
||||
elif len(selected) > 1:
|
||||
elif len(self._selected) > 1:
|
||||
# items: list[Entry] = [self.lib.get_entry_full(x) for x in self.driver.selected]
|
||||
self.__thumb.hide_preview() # TODO: Render mixed selection
|
||||
self.__file_attrs.update_multi_selection(len(selected))
|
||||
self.__file_attrs.update_multi_selection(len(self._selected))
|
||||
self.__file_attrs.update_date_label()
|
||||
self._fields.hide_containers() # TODO: Allow for mixed editing
|
||||
|
||||
|
||||
@@ -145,6 +145,7 @@ def test_title_update(
|
||||
qt_driver.main_window.menu_bar.fix_dupe_files_action = QAction(menu_bar)
|
||||
qt_driver.main_window.menu_bar.clear_thumb_cache_action = QAction(menu_bar)
|
||||
qt_driver.main_window.menu_bar.folders_to_tags_action = QAction(menu_bar)
|
||||
qt_driver.main_window.menu_bar.macros_menu = None
|
||||
|
||||
# Trigger the update
|
||||
qt_driver._init_library(library_dir, open_status) # pyright: ignore[reportPrivateUsage]
|
||||
|
||||